| qid | question | author | author_id | answer |
|---|---|---|---|---|
796,564 | <p><img src="https://i.stack.imgur.com/rLmA6.jpg" alt="enter image description here"></p>
<p>I don't get the answer to this problem; can somebody please tell me what the answer is?</p>
| afedder | 29,604 | <p><strong>Hint</strong>: Suppose some extension of a line segment has equation $y=mx+b$. Then any line that is perpendicular to this line (in other words, at a $90$ degree angle with it), has slope $-\frac{1}{m}$.</p>
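A quick numeric illustration of the hint (my own check, not part of the original answer): a line of slope $m$ and a line of slope $-\frac{1}{m}$ have direction vectors whose dot product is zero, which is exactly the perpendicularity condition.

```python
# Direction vector of y = m*x + b is (1, m); a perpendicular line has slope -1/m.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for m in [0.5, 2.0, -3.0]:
    perp = -1.0 / m
    # (1, m) . (1, -1/m) = 1 - 1 = 0, so the two directions meet at 90 degrees
    assert abs(dot((1.0, m), (1.0, perp))) < 1e-12
```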
|
276,310 | <p>I'm trying to eliminate variables in some fairly simple sets of equations. A typical example is: </p>
<p>$$ 9 x^2 + 18 xy + 9 y^2 - 32 = 256z$$
$$ 9 x^2 + 6 xy - 3 y^2 - 8 = 376z$$
$$ 9 x^2 - 6 xy + y^2 = 512z$$</p>
<p>I'd like to eliminate $x$ and $y$. Mathematica tells me that the answer is $ 161 z^2 -162z + 1 = 0 $. OK. Good.</p>
<p><strong>But how would I do this elimination manually (without Mathematica). I'm hoping that there is some fairly simple mechanical process that I can express in a few hundred lines of C code.</strong></p>
<p>I realize that general elimination procedures are pretty complex, but this sort of problem looks quite special and therefore easier (I hope). Roughly speaking, it's just a system of "linear" equations in the variables $x^2$, $xy$, $y^2$, and $z$. Is there some sort of diagonalization process that can be applied, for example?</p>
<p>Edit:
From the proposed answer below, I see that things can be simplified by setting $p=3x+3y$ and $q = 3x-y$. Then the equations become:</p>
<p>$$p^2 = 32 + 256z$$
$$pq = 8 + 376z$$
$$q^2 = 512z$$
I can then eliminate $p$ and $q$. Is there always a linear transformation that simplifies the problem in this way? If there is, how do I compute it?</p>
| bubba | 31,744 | <p><strong>Clever Observant Solution (as in Mark's comment)</strong></p>
<p>Denote the right-hand sides by $a$, $b$, $c$. In other words, $a = 32+256z$, $b = 8 + 376z$, $c = 512z$. Also, let $p = 3x + 3y$ and $q = 3x - y$. Then the equations become simply:
$$ p^2 = a \quad ; \quad pq = b \quad ; \quad q^2 = c$$
From this, we get $b^2 - ac = 0$. Substituting for $a$, $b$, $c$ gives us the same result we got from Mathematica: $161z^2 -162z + 1 = 0$.</p>
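The substitution can be checked mechanically with a computer algebra system; here is a small SymPy cross-check (my own, not part of the original answer):

```python
import sympy as sp

z = sp.symbols('z')
a = 32 + 256 * z          # right-hand side of the first equation
b = 8 + 376 * z           # right-hand side of the second equation
c = 512 * z               # right-hand side of the third equation

# p^2 = a, pq = b, q^2 = c forces b^2 - a*c = (pq)^2 - p^2*q^2 = 0
expr = sp.expand(b**2 - a * c)
print(expr)               # 10304*z**2 - 10368*z + 64
# Up to the constant factor 64, this is exactly 161*z**2 - 162*z + 1
print(sp.factor(expr))
```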
<p><strong>Dumb Mechanical Solution (which is what I need)</strong></p>
<p>Let $M$ be the matrix
$$ M = \begin{pmatrix} 9&18&9\\9&6&\!\!\!\!-3\\9&\!\!\!\!-6&1\end{pmatrix}$$
and let $u = x^2$, $v = xy$, $w = y^2$. Then the original equations can be written as
$M.[u, v, w]^t = [a, b, c]^t$.</p>
<p>Using standard techniques (eigenvalues or Gaussian elimination), we can find a diagonal matrix $D$ and an invertible matrix $P$ such that $P^{-1}MP = D$. The solution of the system of equations is then $[u, v, w]^t = PD^{-1}P^{-1}[a, b, c]^t$. Setting $v^2 = uw$ again gives us the equation $161z^2 -162z + 1 = 0$.</p>
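A hedged SymPy sketch of the mechanical route (my own reconstruction of the procedure described above, not the author's code): solve the linear system for $u,v,w$, then impose the compatibility condition $v^2 = uw$ to get the polynomial in $z$.

```python
import sympy as sp

z = sp.symbols('z')
a, b, c = 32 + 256 * z, 8 + 376 * z, 512 * z

M = sp.Matrix([[9, 18, 9], [9, 6, -3], [9, -6, 1]])
# Solve M.[u, v, w]^t = [a, b, c]^t, where u = x^2, v = x*y, w = y^2
sol = M.solve(sp.Matrix([a, b, c]))
u_z, v_z, w_z = sol

# Since u = x^2, v = x*y, w = y^2, consistency requires v^2 - u*w = 0,
# which is a quadratic polynomial in z proportional to 161*z**2 - 162*z + 1
constraint = sp.expand(v_z**2 - u_z * w_z)
print(sp.factor(constraint))
```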
|
140,294 | <p>Generative adversarial networks (GANs) are regarded by Yann LeCun as "the most interesting idea in the last ten years in machine learning". They can be used to generate photo-realistic images that are almost indistinguishable from real ones.</p>
<p>A GAN trains two competing neural networks: a generator network, which generates images, and a discriminator network, which distinguishes the generated images from real training images. For example, the images shown below are generated by the network from the texts above them (taken from Han Zhang et al., <a href="https://arxiv.org/pdf/1612.03242.pdf" rel="noreferrer">StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks</a>).</p>
<p><a href="https://i.stack.imgur.com/r3L60.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/r3L60.jpg" alt="enter image description here"></a></p>
<p>I'm wondering whether we can implement a simplified version of that in <em>Mathematica</em>, given that the neural network framework has been enhanced greatly in version 11.1.</p>
| Taliesin Beynon | 7,140 | <p>Yes it is possible. You can do alternating training manually by literally following the algorithm, so that you have a Do loop whose body contains two calls to NetTrain, but that suffers from overhead at each alternation (this could be overcome with clever caching, but we haven't done that yet). An approximation of this is to build a single network and optimize the D and G losses simultaneously by using a negative learning rate for the generator. </p>
<p>I have prototyped this, but only on a toy example. </p>
<p>I encourage you to try it; it didn't take us more than a few hours of playing around to make a simple GAN in which the data distribution is a Gaussian, the discriminator is an MLP, and the generator is a single EmbeddingLayer (just a fixed set of samples that can be moved around by gradient updates).</p>
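The toy setup described above (generator = a fixed set of movable samples, discriminator = a small classifier) is easy to sketch outside Mathematica too. Below is a minimal NumPy version of my own, with a logistic-regression "discriminator" instead of an MLP to keep it short; all rates and sizes are my own choices, not taken from the answer.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=5.0, scale=0.5, size=32)   # "data distribution is a Gaussian"
fake = rng.normal(loc=0.0, scale=0.5, size=32)   # generator: a fixed set of movable samples
fake_init_mean = fake.mean()

w, b = 0.0, 0.0                                  # logistic "discriminator" D(x) = sigmoid(w*x + b)

def sig(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(200):
    # Discriminator step: gradient descent on cross-entropy (real -> 1, fake -> 0)
    dr, df = sig(w * real + b), sig(w * fake + b)
    w -= 0.01 * float((dr - 1) @ real + df @ fake) / 32
    b -= 0.01 * float((dr - 1).sum() + df.sum()) / 32
    # Generator step: move every sample uphill on log D(sample)
    df = sig(w * fake + b)
    fake += 0.05 * w * (1.0 - df)

print(fake_init_mean, fake.mean())  # the movable samples drift toward the data
```

The point is only the alternating-update structure; a real GAN would of course use neural networks for both players.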
|
140,294 | <p>Generative adversarial networks (GANs) are regarded by Yann LeCun as "the most interesting idea in the last ten years in machine learning". They can be used to generate photo-realistic images that are almost indistinguishable from real ones.</p>
<p>A GAN trains two competing neural networks: a generator network, which generates images, and a discriminator network, which distinguishes the generated images from real training images. For example, the images shown below are generated by the network from the texts above them (taken from Han Zhang et al., <a href="https://arxiv.org/pdf/1612.03242.pdf" rel="noreferrer">StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks</a>).</p>
<p><a href="https://i.stack.imgur.com/r3L60.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/r3L60.jpg" alt="enter image description here"></a></p>
<p>I'm wondering whether we can implement a simplified version of that in <em>Mathematica</em>, given that the neural network framework has been enhanced greatly in version 11.1.</p>
| Anton Antonov | 34,008 | <p>I see the generation of images from classification models as a trivial idea, largely because these kinds of processes and algorithms are standard in natural language processing. </p>
<p>More to the point of the question, the following posts show generation of images with classification derived bases:</p>
<ul>
<li><p><a href="https://mathematica.stackexchange.com/a/137519/34008">an answer to "Code that generates a mandala"</a></p></li>
<li><p><a href="https://mathematicaforprediction.wordpress.com/2017/02/10/comparison-of-dimension-reduction-algorithms-over-mandala-images-generation/" rel="noreferrer">"Comparison of dimension reduction algorithms over mandala images generation"</a></p></li>
</ul>
<p>In some sense when using SVD or NNMF bases to recognize an image we reconstruct it by appropriate overlaying of basis images. Obviously such overlaying can be done without a recognition goal just to generate new images.</p>
<h2>Update, 2017-06-24</h2>
<p>Looking at <a href="https://mathematica.stackexchange.com/a/141277/34008">the answer of Michael Curry</a> and running the code (and Taliesin Beynon's code), I see using neural networks as something of a long route. The MNIST-based images those codes generate can be produced in a much quicker and more controllable way using SVD and NNMF.</p>
<p>As an example, examine these basis images of the handwritten digit "5", obtained with NNMF:</p>
<p><a href="https://i.stack.imgur.com/VprFk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VprFk.png" alt="enter image description here"></a></p>
<p>Bases of this kind were used to generate the (re)constructed handwritten-digit images in <a href="https://mathematica.stackexchange.com/a/114565/34008">this MSE answer</a>:</p>
<p><a href="https://i.stack.imgur.com/otB5r.png" rel="noreferrer"><img src="https://i.stack.imgur.com/otB5r.png" alt="enter image description here"></a></p>
<p>(The linked answer describes the generation procedure.)</p>
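For readers who want to experiment outside Mathematica, the NNMF step can be sketched with the classic Lee-Seung multiplicative updates. This is a generic NumPy reconstruction of the technique on random nonnegative data, not Anton's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((50, 64))        # 50 "images", 64 pixels each, nonnegative
k = 8                           # number of basis images

W = rng.random((50, k)) + 0.1   # coefficients
H = rng.random((k, 64)) + 0.1   # basis "images" (the rows of H)

err_before = np.linalg.norm(V - W @ H)
for _ in range(100):
    # Lee-Seung multiplicative updates for the Frobenius objective ||V - W H||
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err_after = np.linalg.norm(V - W @ H)

# Each update is non-increasing in the objective, so the fit improves
print(err_before, err_after)
```

With real digit images in `V`, the rows of `H` play the role of the basis images shown above, and overlaying them with the coefficients in `W` reconstructs (or generates) digits.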
|
140,294 | <p>Generative adversarial networks (GANs) are regarded by Yann LeCun as "the most interesting idea in the last ten years in machine learning". They can be used to generate photo-realistic images that are almost indistinguishable from real ones.</p>
<p>A GAN trains two competing neural networks: a generator network, which generates images, and a discriminator network, which distinguishes the generated images from real training images. For example, the images shown below are generated by the network from the texts above them (taken from Han Zhang et al., <a href="https://arxiv.org/pdf/1612.03242.pdf" rel="noreferrer">StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks</a>).</p>
<p><a href="https://i.stack.imgur.com/r3L60.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/r3L60.jpg" alt="enter image description here"></a></p>
<p>I'm wondering whether we can implement a simplified version of that in <em>Mathematica</em>, given that the neural network framework has been enhanced greatly in version 11.1.</p>
| Michael Curry | 26,564 | <p>I wrestled with this for a while and got some kind of results, but nowhere near the great performance for which GANs are famous. Ultimately, they're absurdly sensitive to hyperparameters and initialization, and if you don't <em>exactly</em> imitate the published settings, you are unlikely to get good results.</p>
<p>I figure I should post my attempt -- maybe the community can figure out a good set of parameters that works. Mine sort of trained, but suffered from mode collapse and often converged on a blob. However, for sets of all one digit, it did seem to work okay, although this is much easier and not really where GANs have any advantage.</p>
<p>I tried to implement the <a href="https://github.com/martinarjovsky/WassersteinGAN" rel="noreferrer">Wasserstein GAN</a> (paper available <a href="https://arxiv.org/abs/1701.07875" rel="noreferrer">here</a>) to generate MNIST digits. The training procedure is to update the discriminator on a batch 5 times for every one generator update. Because Mathematica doesn't yet allow preservation of optimizer parameters between calls to NetTrain, I couldn't get this to work. Instead, I trained the networks jointly as suggested by Taliesin Beynon, setting the learning rate on the generator to something like -1/5, because it seemed like a plausible approximation.</p>
<p>The paper also used RMSProp as an optimizer. Mathematica has an RMSProp option, but on the net I defined it immediately diverged no matter what learning rate I chose. I used ADAM instead.</p>
<p>To begin, let's get a big batch of MNIST digits.</p>
<pre><code>mnist = ResourceData["MNIST"];
mnistDigits = First /@ mnist;
</code></pre>
<p>Let's give a 10-dimensional noise input to the generator, and define the generator and discriminator. Notice that the discriminator does not have an activation on its output -- this is specific to the WGAN, a normal GAN would have a LogisticSigmoid or something.</p>
<pre><code>randomDim = 10;
generator =
NetChain[{128, Ramp, 128, Ramp, 28*28, LogisticSigmoid,
ReshapeLayer[{1, 28, 28}]}, "Input" -> randomDim]
discriminator =
NetChain[{128, Ramp, 128, Ramp, 128, Ramp, 1},
"Input" -> {1, 28, 28}]
</code></pre>
<p>Now the tricky part. We'll feed noise into the generator to produce a fake image, and also accept a real image as input. We want to apply the discriminator to both images, but with one set of weights, so we concatenate them and use NetMapOperator. Then, the loss function should be to maximize the score on the real image while minimizing the score on the fake image, so we negate the real score and then add them.</p>
<pre><code>wganNet =
NetInitialize[
NetGraph[<|"gen" -> generator,
"discrimop" -> NetMapOperator[discriminator],
"cat" -> CatenateLayer[],
"reshape" -> ReshapeLayer[{2, 1, 28, 28}],
"flat" -> FlattenLayer[], "total" -> SummationLayer[],
"scale" ->
ConstantTimesLayer["Scaling" -> {-1, 1}]|>, {NetPort["random"] ->
"gen" -> "cat", NetPort["Input"] -> "cat",
"cat" ->
"reshape" -> "discrimop" -> "flat" -> "scale" -> "total"},
"Input" -> {1, 28, 28}]]
</code></pre>
<p>One of the strengths of Mathematica's neural networks framework is that it's really easy to watch the networks train. We'll feed the trainer a progress function that takes 4 fixed random inputs and shows the generator's output, so we can watch the generator evolve over time.</p>
<pre><code>ClearAll[progressFuncCreator]
progressFuncCreator[rands_List] :=
Function[{reals},
ImageResize[
NetDecoder[{"Image", "Grayscale"}][
NetExtract[#Net, "gen"][reals]], 50]] /@ rands &
</code></pre>
<p>Finally, create the training data:</p>
<pre><code>trainingData = <|"random" -> RandomReal[{-1, 1}, {randomDim}],
"Input" -> ArrayReshape[ImageData[#], {1, 28, 28}]|> & /@
mnistDigits;
</code></pre>
<p>And train, watching the generator make a bunch of vaguely number-shaped blobs. Notice the "WeightClipping" option on the discriminator -- this is the "secret sauce" in Wasserstein GANs that makes them learn an approximation of the Wasserstein/Earth-Mover's distance as opposed to the Jensen-Shannon distance, as explained in the paper.</p>
<pre><code>NetTrain[wganNet, trainingData, "Output",
Method -> {"ADAM", "Beta1" -> 0.5, "LearningRate" -> 0.00005,
"WeightClipping" -> {"discrimop" -> 0.01}},
TrainingProgressReporting ->
progressFuncCreator[Table[RandomReal[{-1, 1}, {randomDim}], 4]],
LearningRateMultipliers -> {"scale" -> 0, "gen" -> -0.2},
TargetDevice -> "GPU", BatchSize -> 64]
</code></pre>
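For readers outside Mathematica: the critic loss and the weight-clipping step of the WGAN can be sketched in a few lines of NumPy. This is a toy linear critic of my own illustration, not a translation of the network above:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, size=(64, 3))   # a batch of "real" samples
fake = rng.normal(0.0, 1.0, size=(64, 3))   # a batch of generated samples

w = np.zeros(3)                             # toy linear critic: D(x) = x . w
clip = 0.01                                 # analogous to the "WeightClipping" value

for _ in range(50):
    # Critic loss: mean D(fake) - mean D(real); gradient descent on it
    grad = fake.mean(axis=0) - real.mean(axis=0)
    w -= 0.005 * grad
    # The WGAN trick: clip every critic weight into [-clip, clip]
    w = np.clip(w, -clip, clip)

print(w)  # every entry lies in [-0.01, 0.01]
```

The clipping keeps the critic (approximately) Lipschitz, which is what makes the learned objective approximate the Earth-Mover distance.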
<p>Overall, my impression of the neural networks framework is <em>very</em> good. It's extremely flexible, coherently designed, and also extremely pretty. Crucially, it's easier to watch your net train than in any other framework. However, due to difficulties with staged training/saving optimizer parameters, it's not yet possible to replicate (in the sense of replicating a scientific experiment) some published results, like GANs, that use weirder architectures.</p>
|
23,674 | <p>Let $v$ be the 3-adic valuation on $\mathbb{Q}$ and consider the subring $\mathbb{Z}_{(3)}$ of $\mathbb{Q}$ defined by
$$
\mathbb{Z}_{(3)} = \{ x \in \mathbb{Q} : v(x) \geq 0 \}.
$$
That is, $\mathbb{Z}_{(3)}$ is the ring of rational numbers that are integral with respect to $v$. $\mathbb{Z}_{(3)}$ is also the localization of $\mathbb{Z}$ at the prime ideal $(3)$. I know $\mathbb{Z}_{(3)}$ is integrally closed in $\mathbb{Q}$.</p>
<p>I want to find the integral closure of $\mathbb{Z}_{(3)}$ in the field $\mathbb{Q}(\sqrt{-5})$:
$$
\overline{\mathbb{Z}_{(3)}} = \{x \in \mathbb{Q}(\sqrt{-5}) : x \text{ is a root of a monic irreducible polynomial with coefficients in } \mathbb{Z}_{(3)} \}
$$</p>
<p>How can I do this? What should I be thinking about?</p>
| Pete L. Clark | 299 | <p>Here is a different and (perhaps) somewhat more elementary approach than Andrea's.</p>
<p>Let <span class="math-container">$R$</span> be the integral closure of <span class="math-container">$\mathbb{Z}_{(3)}$</span> in <span class="math-container">$\mathbb{Q}(\sqrt{-5})$</span>. Clearly <span class="math-container">$R$</span> is a subring of <span class="math-container">$\mathbb{Q}(\sqrt{-5})$</span>, and thus we are looking for a necessary and sufficient condition on <span class="math-container">$a,b \in \mathbb{Q}$</span> such that <span class="math-container">$a+b\sqrt{-5} \in R$</span>.</p>
<p>Let <span class="math-container">$P(a,b)$</span> be the minimal polynomial of <span class="math-container">$a+b\sqrt{-5}$</span> over <span class="math-container">$\mathbb{Q}$</span>, i.e., the unique monic polynomial with <span class="math-container">$\mathbb{Q}$</span>-coefficients of least degree satisfied by <span class="math-container">$a+b\sqrt{-5}$</span>. Certainly if <span class="math-container">$P(a,b)$</span> has coefficients lying in <span class="math-container">$\mathbb{Z}_{(3)}$</span> then <span class="math-container">$a+b\sqrt{-5}$</span> lies in <span class="math-container">$R$</span>. In fact the converse is true because the ring <span class="math-container">$\mathbb{Z}_{(3)}$</span> is integrally closed (e.g. it is a PID and PID <span class="math-container">$\implies$</span> UFD <span class="math-container">$\implies$</span> integrally closed). For a proof of this fact, see e.g. the section on Integrally Closed Domains in <a href="http://alpha.math.uga.edu/%7Epete/integral.pdf" rel="nofollow noreferrer">these notes</a>. (Currently this is Theorem <span class="math-container">$260$</span> in <span class="math-container">$\S 14.5$</span>, but that more precise information is subject to change.)</p>
<p>Try out this computation for yourself: you should get that <span class="math-container">$a+b\sqrt{-5} \in R \iff a,b \in \mathbb{Z}_{(3)}$</span>.</p>
<p>Now in my notes I also prove the compatibility of integral closure with localization which Andrea uses in his answer: in fact it comes about five pages earlier than the fact about minimal polynomials mentioned above. So it is certainly arguable which of these facts is "more basic". My reason for choosing the latter is because I think it is a little more concrete and amenable to computation: indeed, if you don't know this fact about minimal polynomials, it seems to me that you're going to have a hard time computing any nontrivial examples of integral closures whatsoever.</p>
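The suggested computation can be spot-checked in code. For $b \neq 0$ the minimal polynomial of $a+b\sqrt{-5}$ is $x^2 - 2ax + (a^2+5b^2)$, so membership in $R$ amounts to $2a$ and $a^2+5b^2$ having nonnegative 3-adic valuation. A small Python check of my own (not from the answer):

```python
from fractions import Fraction

def v3(q: Fraction) -> int:
    """3-adic valuation of a nonzero positive rational."""
    n, d, v = q.numerator, q.denominator, 0
    while n % 3 == 0:
        n //= 3; v += 1
    while d % 3 == 0:
        d //= 3; v -= 1
    return v

def integral_over_Z3(a: Fraction, b: Fraction) -> bool:
    # Minimal polynomial of a + b*sqrt(-5):  x^2 - 2a x + (a^2 + 5 b^2)
    trace, norm = 2 * a, a * a + 5 * b * b
    return (trace == 0 or v3(trace) >= 0) and (norm == 0 or v3(norm) >= 0)

# 1/2 + sqrt(-5) is integral: 1/2 already lies in Z_(3), since 2 is invertible there
assert integral_over_Z3(Fraction(1, 2), Fraction(1))
# sqrt(-5)/3 is not: its norm 5/9 has 3-adic valuation -2
assert not integral_over_Z3(Fraction(0), Fraction(1, 3))
```

These examples agree with the claimed criterion $a+b\sqrt{-5} \in R \iff a,b \in \mathbb{Z}_{(3)}$.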
|
136,363 | <p>Consider the following case:</p>
<pre><code>(a^3*b) //. {a^2 -> c, a*b -> d}
</code></pre>
<p>instead of <code>c d</code> the output is:</p>
<pre><code>(*a^3*b*)
</code></pre>
<p>How can I get what I want?</p>
| corey979 | 22,013 | <p>How about</p>
<pre><code>Last@PolynomialReduce[a^3*b, {a^2 - c, a*b - d}, {a, b}]
</code></pre>
<blockquote>
<p>c d</p>
</blockquote>
<p>Or a trickier alternative:</p>
<pre><code>a^3*b //. {a^n_ :> c^Quotient[n, 2]*a^Mod[n, 2],
Times[r1___, a, b, r2___] :> Times[d, r1, r2]}
</code></pre>
<blockquote>
<p>c d</p>
</blockquote>
|
2,583,454 | <p>Consider for instance the linear system:</p>
<p>$$\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>This is over determined and thus has no solution. Yet, by simply multiplying both sides by $\textbf{A}^T$:</p>
<p>$$\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>We find that the system now has a unique solution, which is the (x,y) that minimizes the squared error.</p>
<p>Now I understand the derivation of why multiplying by the transpose helps to find the pseudoinverse which then helps to perform OLS regression, but my question is perhaps a bit more fundamental. </p>
<p>How can multiplying both sides of an equation by a matrix change a system which previously had no solutions into one that has a unique solution? This seems to go against my assumption that the solutions to $\textbf{A}x = \textbf{B}$ are the same as the solutions to $\textbf{P}\textbf{A}x = \textbf{P}\textbf{B}$.</p>
| dxiv | 291,201 | <blockquote>
<p>I assumed that the solutions to $\textbf{A}x = \textbf{B}$ were the same as the solutions to $\textbf{P}\textbf{A}x = \textbf{P}\textbf{B}$.</p>
</blockquote>
<p>The direct implication $\textbf{A}x = \textbf{B} \implies \textbf{P}\textbf{A}x = \textbf{P}\textbf{B}$ holds for any $\textbf{P}$. However, the reverse implication $\textbf{P}\textbf{A}x = \textbf{P}\textbf{B} \implies \textbf{A}x = \textbf{B}$ <em>only</em> holds if $\textbf{P}$ has a left inverse $\textbf{P}^{-1}$ (because you can then multiply with $\textbf{P}^{-1}$ on the left to derive $\textbf{A}x = \textbf{B}$). But in this case, $\textbf{P}$ is a rectangular matrix with more columns than rows which has no left inverse.</p>
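A NumPy check of both halves of this (my own illustration): the original $3\times 2$ system is inconsistent, while the normal equations $A^TAx = A^Tb$ have a unique solution, and that solution is exactly the least-squares minimizer.

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])
b = np.array([1., 2., 4.])

# Least-squares solution of the overdetermined system
x_ls, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# Multiplying by A^T gives a square, invertible 2x2 system (the normal equations)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(x_ls, x_ne)                     # the two solutions coincide
print(np.linalg.norm(A @ x_ne - b))   # but A x = b itself is not satisfied: nonzero residual
```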
|
24,230 | <p>$f$ is continuous on $[0,1]$, and $f(0)=f(1)$.</p>
<p>I want to prove that there is an $a \in [0,0.5]$ such that $f(a+0.5)=f(a)$.</p>
<p>OK, so Rolle's theorem can be useful here, but I can't see the connection to the derivative.</p>
<p>(Weierstrass? Uniform continuity?) I'd be glad for some hints.</p>
<p>Thanks.</p>
| Yuval Filmus | 1,277 | <p>Consider the function $g(a) = f(a+0.5) - f(a)$ on the interval $[0,0.5]$, and use the intermediate value theorem.</p>
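The hint can be seen numerically: for any concrete $f$ with $f(0)=f(1)$, $g(a)=f(a+0.5)-f(a)$ satisfies $g(0.5)=-g(0)$, so the intermediate value theorem guarantees a root, which bisection can find. A small self-contained demo with my own choice of $f$:

```python
import math

f = lambda x: math.cos(2 * math.pi * x)   # continuous with f(0) = f(1) = 1
g = lambda a: f(a + 0.5) - f(a)           # note g(0.5) = f(1) - f(0.5) = -g(0)

lo, hi = 0.0, 0.5
assert g(lo) * g(hi) <= 0                 # sign change: the IVT applies
for _ in range(60):                       # plain bisection
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

a = 0.5 * (lo + hi)
print(a, f(a + 0.5) - f(a))               # f(a + 0.5) and f(a) agree at the root
```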
|
3,339,780 | <p>The popular definition of a vector is </p>
<blockquote>
<p>A vector is an object that has both a magnitude and <strong>a</strong> direction.</p>
</blockquote>
<p>We know that zero vector has no specific <strong>single</strong> direction.</p>
<p>Then how can it be a vector?</p>
| David G. Stork | 210,401 | <p><span class="math-container">$0 \cdot (a,b,c)$</span> (for arbitrary <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>) has a magnitude (<span class="math-container">$0$</span>) and a direction <span class="math-container">$(a,b,c)$</span>. It is a vector. </p>
<p><span class="math-container">$0 \cdot (d,e,f)$</span> (for arbitrary <span class="math-container">$d$</span>, <span class="math-container">$e$</span> and <span class="math-container">$f$</span>) has a magnitude (<span class="math-container">$0$</span>) and a direction <span class="math-container">$(d,e,f)$</span>. It is a vector.</p>
<p>These happen to be the same vector.</p>
<p>So what?</p>
|
2,474,355 | <p>I have this system of linear equations.</p>
<p><em>2x<sub>1</sub> - 4x<sub>2</sub> - x<sub>3</sub> = 1</em></p>
<p><em>x<sub>1</sub> - 3x<sub>2</sub> + x<sub>3</sub> = 1</em></p>
<p><em>3x<sub>1</sub> - 5x<sub>2</sub> - 3x<sub>3</sub> = 1</em></p>
<p>What is the best way or is there any special way to solve this sort of system?</p>
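A quick NumPy check (my own, not from the answer below) shows that this particular system is singular: the third equation equals twice the first minus the second, so the coefficient matrix has rank 2 and there are infinitely many solutions, one of which is $(x_1,x_2,x_3)=(3,1,1)$.

```python
import numpy as np

A = np.array([[2., -4., -1.],
              [1., -3., 1.],
              [3., -5., -3.]])
b = np.array([1., 1., 1.])

print(np.linalg.matrix_rank(A))            # 2: the rows are dependent
print(np.allclose(2 * A[0] - A[1], A[2]))  # True: row3 = 2*row1 - row2

x = np.array([3., 1., 1.])                 # one particular solution
print(A @ x)                               # [1. 1. 1.]
```

So plain Gaussian elimination (or `np.linalg.solve`) will fail here; the full solution set is a one-parameter family.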
| DanielWainfleet | 254,665 | <p>A topology $T$ on a set $S$ has a closure operator $Cl_T(A)$ for $A\subset S$, which can be regarded as a function from the set of all subsets of $S$ to the set of all $T$-closed sets. If $T_1,T_2$ are topologies on $S$ and $Cl_{T_1}=Cl_{T_2}$ then the set of $T_1$-closed sets equals the set of $T_2$-closed sets, so $T_1=T_2.$ </p>
<p>If $T_d$ is the topology generated by a metric $d$ then $Cl_{T_d}(A)$ is the set of points that are limits (with respect to $d$) of sequences of member(s) of $A.$ So metrics $d,e$ on $S$ generate the same topology iff $Cl_{T_d}=Cl_{T_e}$ iff the set of $d$-convergent sequences equals the set of $e$-convergent sequences. </p>
<p>When $d,e$ are equivalent metrics (i.e. $T_d=T_e$) there may be a sequence which is $d$-Cauchy but not $e$-Cauchy. Such a sequence could not converge with respect to either $d$ or $e$. </p>
<p>For example let $S=\Bbb R$ with the usual topology. Let $f:\Bbb R\to (-1,1)$ be a continuous and strictly monotonic surjection. Let $e(x,y)=|x-y|$ and $d(x,y)=|f(x)-f(y)|.$ Then the sequence $(n)_{n\in\Bbb N}$ is $d$-Cauchy but not $e$-Cauchy. (Caution: Although the sequence $(f(n))_{n\in \Bbb N}$ is converging in $\Bbb R$ to $1$, there is no $x\in S$ for which $f(x)=1$ and there is no $x\in S$ for which $\lim_{n\to \infty} d(n,x)=0$.)</p>
<p>When $md\leq e\leq Md$ for some positive $m,M$ the metrics $d,e$ are called uniformly equivalent. Uniformly equivalent metrics are equivalent. In the example above, $d,e$ are equivalent but not uniformly equivalent.</p>
<p>A common textbook example for the function $f$ in the example above is $f(x)=\frac {2}{\pi} \arctan (x).$</p>
|
501,512 | <p>The lines CD and EF are perpendicular, with points $C(1,2)$, $D(3,-4)$, $E(-2,5)$, and $F(k,4)$. Find the value of the constant $k$.</p>
| njguliyev | 90,209 | <p>Hint: $(2, -6)\cdot(k+2,-1)=0$.</p>
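Spelling the hint out (my own check): $\vec{CD}=(2,-6)$ and $\vec{EF}=(k+2,-1)$, and the perpendicularity condition $2(k+2)+(-6)(-1)=0$ gives $k=-5$.

```python
C, D, E = (1, 2), (3, -4), (-2, 5)

# Direction of CD
cd = (D[0] - C[0], D[1] - C[1])     # (2, -6)

# Solve 2*(k + 2) + (-6)*(-1) = 0  =>  k = -5
k = -5
F = (k, 4)
ef = (F[0] - E[0], F[1] - E[1])     # (-3, -1)

dot = cd[0] * ef[0] + cd[1] * ef[1]
print(dot)                          # 0 confirms CD is perpendicular to EF
```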
|
3,984,480 | <p>Show that <span class="math-container">$-\vec{0} = \vec{0}$</span> in any vector space.</p>
<p>I know this is a seemingly obvious statement but is the following justification correct:</p>
<p>Assume <span class="math-container">$-\vec{0} \neq \vec{0}$</span>.</p>
<p><span class="math-container">$$(4): \vec{u} + \vec{0} = \vec{u}$$</span>
<span class="math-container">$$ \vec{u} + \vec{0} + (-\vec{0}) = \vec{u} + (-\vec{0})$$</span>
<span class="math-container">$$(5): \vec{u} -\vec{u} + \vec{0} + (-\vec{0}) = \vec{u} - \vec{u} + (-\vec{0})$$</span>
<span class="math-container">$$(5): \vec{0} + (-\vec{0}) = (-\vec{0}) \rightarrow -\vec{0} + \vec{0} = \vec{0}$$</span></p>
<p>Hence we clearly see that <span class="math-container">$-\vec{0} = \vec{0}$</span>. QED. Is this reasoning correct?</p>
| Moishe Kohan | 84,907 | <p>For every finite generating set <span class="math-container">$S$</span> of <span class="math-container">$G=Z^n$</span>, the generating function of its growth
<span class="math-container">$$
\sum_{n=0}^\infty \beta_S(n)t^n
$$</span>
is rational. Here <span class="math-container">$\beta_S(n)$</span> is the number of group elements <span class="math-container">$g\in G$</span> of norm <span class="math-container">$|g|\le n$</span> and <span class="math-container">$|g|$</span> is the distance from <span class="math-container">$g$</span> to <span class="math-container">$0$</span> in the Cayley graph of <span class="math-container">$G$</span> associated with <span class="math-container">$S$</span>.</p>
<p>This also holds for <em>virtually abelian</em> groups (i.e. groups containing <span class="math-container">$Z^n$</span> as a finite-index subgroup) and many other groups. However, surprisingly, rationality becomes tricky for nilpotent groups: it depends on the generating set!</p>
<p><em>Benson, M.</em>, <a href="http://dx.doi.org/10.1007/BF01394026" rel="nofollow noreferrer"><strong>Growth series of finite extensions of <span class="math-container">$Z^n$</span> are rational</strong></a>, Invent. Math. 73, 251-269 (1983). <a href="https://zbmath.org/?q=an:0498.20022" rel="nofollow noreferrer">ZBL0498.20022</a>.</p>
<p><em>Stoll, M.</em>, <a href="http://dx.doi.org/10.1007/s002220050090" rel="nofollow noreferrer"><strong>Rational and transcendental growth series for the higher Heisenberg groups</strong></a>, Invent. Math. 126, No. 1, 85-109 (1996). <a href="https://zbmath.org/?q=an:0869.20018" rel="nofollow noreferrer">ZBL0869.20018</a>.</p>
|
1,371,580 | <p>I'm reading "A Concise Introduction to Pure Mathematics" by Liebeck, and in the exercises of the second chapter I found this question:</p>
<p>"Show that the decimal expression for $\sqrt 2 $ is not periodic"</p>
<p>If I write $\sqrt 2$ in its decimal form, I should obtain something like:</p>
<p>$\sqrt 2 = a_0.a_1a_2a_3\ldots$</p>
<p>But how can I prove that the digit sequence $a_1a_2a_3\ldots$ is not eventually periodic?</p>
<p>Should I prove this by contradiction?</p>
<p>Thanks a lot for your help, and excuse any grammatical mistakes I may have committed; English is not my native language.</p>
| DanielWainfleet | 254,665 | <p>If $x$ has an expansion, in any base $B$, which is eventually periodic with period length $P$, then multiplying $x$ by $B^P$ and subtracting $x$ yields a rational value for $(B^P - 1)x$; hence $x$ is rational. Conversely, if $x$ is rational, then computing its representation in base $B$ by long division eventually gives a remainder that occurred before, as there are only finitely many possible remainders; after that it repeats periodically.</p>
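The second half of the argument is exactly how one computes a repeating expansion by long division. A short Python sketch of my own that finds the period of $1/7$ in base $10$ by watching for a repeated remainder:

```python
def repeating_expansion(p: int, q: int, base: int = 10):
    """Digits after the point and period length of p/q (assumes 0 < p < q)."""
    seen = {}          # remainder -> position where it first appeared
    digits = []
    r = p
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= base                      # one step of long division
        digits.append(r // q)
        r %= q
    if r == 0:
        return digits, 0               # terminating expansion
    return digits, len(digits) - seen[r]   # period length

digits, period = repeating_expansion(1, 7)
print(digits, period)   # [1, 4, 2, 8, 5, 7] with period 6
```

Since only the remainders $0,\dots,q-1$ can occur, the loop above always terminates, which is the pigeonhole step in the proof.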
|
1,339,875 | <p>Suppose we are given a function
$$g\left ( x \right )= \sum_{n=1}^{\infty}\frac{\sin \left ( nx \right )}{10^{n}\sin \left ( x \right )},x\neq k\pi , k\in\mathbb{Z}$$
and
$$g\left ( k\pi \right )=\lim _{x\rightarrow k\pi}g\left ( x \right )$$
I found that $\lim _{x\rightarrow k\pi}g\left ( x \right )= \sum_{n=1}^{\infty}\frac{1}{10^{n}}$ for both odd and even $k$. However, I am still unsure how to proceed from here. What does continuity even mean for functions like this? If I wanted to take the derivative of this function, surely I must prove first that it converges and then I would apply differentiation term by term.
Proving periodicity for one term would be easy enough if it weren't for the $10^{n}$ term in the denominator. How to deal with it?
<em>EDIT</em>: How to find the Fourier series of this function (by first proving that Dirichlet's conditions are met)?</p>
| Community | -1 | <p>You should first examine the series
$\sum\limits_{n = 1}^\infty \frac{\sin (nx)}{10^n}$
and check that this is a continuous function.</p>
<p>Looking at the form of the series, think of the geometric series
$f(z) = \sum\limits_{n = 1}^\infty \frac{z^n}{10^n}$.
By the theory of power series, $f(z)$ converges in a disk of radius $10$; therefore it converges uniformly to a differentiable function on the closed unit disk.</p>
<p>Now $\sum\limits_{n = 1}^\infty \frac{\sin (nx)}{10^n}$ is just the imaginary part of $f(z)$, i.e., $\operatorname{Im}(f(z))$, on the unit circle. Since
$f(z) = \sum\limits_{n = 1}^\infty \frac{z^n}{10^n} = \frac{1}{1 - z/10} - 1 = \frac{z}{10 - z}$,
we get
$f(e^{ix}) = \frac{e^{ix}}{10 - e^{ix}} = \frac{10\cos (x) - 1}{101 - 20\cos (x)} + i\,\frac{10\sin (x)}{101 - 20\cos (x)}$.</p>
<p>Thus $\sum\limits_{n = 1}^\infty \frac{\sin (nx)}{10^n}$ converges uniformly to the differentiable function
$\frac{10\sin (x)}{101 - 20\cos (x)}$ on the whole real line.</p>
<p>Therefore, for $x$ not a multiple of $\pi$,
$g(x) = \frac{10}{101 - 20\cos (x)}$.</p>
<p>Since
$\lim_{x \to k\pi } g(x) = \lim_{x \to k\pi } \frac{10}{101 - 20\cos (x)} = \frac{10}{101 - 20\cos (k\pi )}$,
we may regard $g(x)$ as given by $g(x) = \frac{10}{101 - 20\cos (x)}$ everywhere, by giving it the value of its limit at all multiples of $\pi$. Thus, taking the limit or not,
$g(2k\pi ) = \frac{10}{101 - 20\cos (2k\pi )} = \frac{10}{81}$ and
$g((2k + 1)\pi ) = \frac{10}{101 - 20\cos ((2k + 1)\pi )} = \frac{10}{121}$.</p>
<p>You can now use $g(x)$ in this form to find its Fourier series. $g$ is an even function of period $2\pi$, so it has a cosine Fourier series with coefficients $a_n$ given by
$a_n = \frac{1}{\pi }\int_0^{2\pi } \frac{10\cos (nx)}{101 - 20\cos (x)}\,dx = \frac{2}{99 \times 10^{n - 1}}$, for $n = 0, 1, 2, \ldots$
You can use complex contour integration to evaluate the integrals.</p>
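The closed form for $a_n$ can also be sanity-checked numerically; for smooth periodic integrands an equally spaced Riemann sum converges extremely fast. A small NumPy check of my own:

```python
import numpy as np

n_pts = 4096
x = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
g = 10.0 / (101.0 - 20.0 * np.cos(x))

# a_n = (1/pi) * integral_0^{2 pi} g(x) cos(n x) dx, claimed equal to 2/(99*10^(n-1))
for n in range(4):
    a_n = (g * np.cos(n * x)).sum() * (2.0 * np.pi / n_pts) / np.pi
    print(n, a_n, 2.0 / (99.0 * 10.0 ** (n - 1)))
```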
|
1,339,875 | <p>Suppose we are given a function
$$g\left ( x \right )= \sum_{n=1}^{\infty}\frac{\sin \left ( nx \right )}{10^{n}\sin \left ( x \right )},x\neq k\pi , k\in\mathbb{Z}$$
and
$$g\left ( k\pi \right )=\lim _{x\rightarrow k\pi}g\left ( x \right )$$
I found that $\lim _{x\rightarrow k\pi}g\left ( x \right )= \sum_{n=1}^{\infty}\frac{1}{10^{n}}$ for both odd and even $k$. However, I am still unsure how to proceed from here. What does continuity even mean for functions like this? If I wanted to take the derivative of this function, surely I must prove first that it converges and then I would apply differentiation term by term.
Proving periodicity for one term would be easy enough if it weren't for the $10^{n}$ term in the denominator. How to deal with it?
<em>EDIT</em>: How to find the Fourier series of this function (by first proving that Dirichlet's conditions are met)?</p>
| fho | 225,297 | <p>First check that the series converges uniformly. It sure does, on the union $x \in I = \bigcup_{k=-\infty}^{\infty} (\pi k,\, \pi (k+1))$, because:
$$
\sum_{n=1}^{\infty} \frac{1}{10^n} \frac{\sin(nx)}{\sin(x)} \le \sum_{n=1}^{\infty} \frac{1}{10^n} \frac{|\sin(nx)|}{|\sin(x)|} \le \sum_{n=1}^{\infty} \frac{n}{10^n}
$$
Now consider the singular points: $\lim_{x \to k \pi} g(x)$, where $g(x) = \sum_{n \in \mathbb{N}} \frac{1}{10^n} \frac{\sin(nx)}{\sin(x)}$. The interesting part here is just $\frac{\sin(n x)}{\sin(x)}$. A handy formula is $\sin(x + y) = \sin(x)\cos(y) + \cos(x)\sin(y)$. You can now rewrite the limit:
$$
\lim_{x \to k \pi} 10^{-n} \frac{\sin(xn)}{\sin(x)} = \lim_{\varepsilon \to 0} 10^{-n}\frac{\sin(n\varepsilon + n k \pi)}{\sin(\varepsilon + k \pi)} = \lim_{\varepsilon \to 0} 10^{-n}\frac{\sin(n\varepsilon)(-1)^{nk} + 0}{\sin(\varepsilon) (-1)^k + 0} = 10^{-n} (-1)^{-n} n
$$
Now you can see that it is continuous on the whole of $\mathbb{R}$.<br>
Continuity means: $\lim_{x \to x'} g(x) = g(x')$. The factor $10^{-n}$ does not affect the periodicity of this function. To prove periodicity, you just have to show that there exists $T \in \mathbb{R}$ such that for all $t \in \mathbb{R}$: $g(t + T) = g(t)$. Such a $T$ exists and is equal to $2 \pi$; just reuse the trick from the limit computation above. If you look at the function you can see that it is even, so you don't have to calculate the odd (sine) coefficients in the Fourier series.
$$
a_m = \frac{2}{2 \pi} \int_{-\pi}^{\pi} \sum_{n \in \mathbb{N}} 10^{-n} \frac{\sin(nx)}{\sin(x)} \cos\bigg(m \frac{2 \pi x}{2\pi}\bigg) \mathrm{d}x = \frac{2}{\pi} \sum_{n \in \mathbb{N}} \int_{0}^{\pi} 10^{-n} \frac{\sin(nx)}{\sin(x)} \cos(m x) \mathrm{d}x
$$</p>
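<p>A numerical check of the boundary values (a Python sketch, my addition, not part of the original answer): evaluating the partial sums just next to $x=\pi$ and $x=2\pi$ shows that the limit at $k\pi$ depends on the parity of $k$, matching $\sum_n n(-1)^{k(n-1)}10^{-n}$, i.e. $10/121$ for odd $k$ and $10/81$ for even $k$:</p>

```python
import math

def g(x, terms=60):
    return sum(math.sin(n * x) / (10 ** n * math.sin(x)) for n in range(1, terms))

# Closed forms via sum_{n>=1} n z^n = z/(1-z)^2 at z = -1/10 and z = +1/10:
odd_limit = 10 / 121    # limit of g at x = k*pi, k odd
even_limit = 10 / 81    # limit of g at x = k*pi, k even

assert abs(g(math.pi - 1e-5) - odd_limit) < 1e-6
assert abs(g(2 * math.pi + 1e-5) - even_limit) < 1e-6
print("limits at pi and 2*pi differ:", odd_limit, even_limit)
```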
|
72,669 | <p>I encountered this site today <a href="https://code.google.com/p/google-styleguide/">https://code.google.com/p/google-styleguide/</a> regarding the programming style in some languages. What would be best programming practices in Mathematica, for small and large projects ?</p>
| Albert Retey | 169 | <p>I think this is a very relevant question, as it is generally agreed that having "a" coding styleguide for every project where several people write code is a very good (inevitable?) thing. It also seems to be agreed that it is more important to have a styleguide/standard at all than how exactly that standard looks. I am also convinced that, especially for Mathematica, there are many details which <strong>should</strong> be handled differently for different kinds of projects and teams.</p>
<p>Thus, instead of giving just an example of another style convention, I think it makes more sense to write up a list of things that such a guideline could/should address. It would then be a second step to fill these entries with content (or probably drop some), and probably every team/project wants to have its own details. I would prefer not to fill in specific suggestions for each entry here (too much danger of disagreement); if people think it would make sense to work on a "Mathematica Stack Exchange users" suggestion, there is the other wiki answer from Szabolcs which could be used for that. Of course such a list will never be complete, and for some entries it might be open to debate whether they are relevant at all. I made this list a community wiki and invite everyone to contribute. My suggestion is to not delete entries which one thinks are not relevant, but only give some pro/con arguments for them.</p>
<h1>Use of Tools</h1>
<p>It might make sense to make requirements about which tools to use or not use. There are plenty of possibilities to write, develop, document and test Mathematica code, and it certainly is good to have a convention about that. Possible decisions include:</p>
<ul>
<li>use of frontend, workbench, text editors, other IDEs (e.g. the Mathematica IDEA plugin) for code development</li>
<li>use of internal or external tools to write/run tests</li>
<li>use of version control system and which</li>
<li>use of external tools for e.g. documentation</li>
</ul>
<p>Of course not all of these are independent; it is known that notebooks do not work well together with version control systems, so making use of the latter might influence the decision about whether to use the frontend (or more precisely notebook files for code) or not...</p>
<h1>File/Code Organisation</h1>
<h2>Use of File Formats</h2>
<ul>
<li>use of notebooks or packages for source code</li>
<li>use of notebooks or other formats for documentation</li>
<li>file formats for data that is relevant for the project (e.g. csv vs. excel)</li>
</ul>
<h2>Organization of Project/Source-Code Directory</h2>
<ul>
<li>define directory layout and which content should go where</li>
<li>modularisation of code:</li>
<li>how much content per file: one function/symbol definition per file,</li>
<li>how many lines are typically acceptable per function, per file,...</li>
<li>under which conditions are exceptions from the above acceptable?</li>
<li>use of extra directories vs. just extra package files for subpackages</li>
<li>use and naming of public/private contexts for subpackages</li>
<li>use of <code>Protect</code> and other Attributes for symbols.</li>
</ul>
<h1>Naming Conventions</h1>
<h2>Directory/File Names</h2>
<ul>
<li>require restrictions so that package files can be loaded with <code>Needs</code></li>
<li>uppercase/camelcase/... conventions for directories and filenames</li>
<li>use of "-","_", " ",... in (non-package) filenames</li>
<li>use of file extensions, upper-/lower-case</li>
</ul>
<h2>Symbol Names</h2>
<ul>
<li>upper vs. lower CamelCase, allow/suggest just lower case</li>
<li>allow non-ascii characters in symbol names or not? if yes, restrict to subset like e.g. greek letters?</li>
<li>make naming depend on symbol purpose and content? If yes:</li>
<li>use verbs for symbols used as functions, nouns for symbols used as variables</li>
<li>use of singular vs. plural for lists (<code>number[[idx]]</code> vs. <code>numbers[[idx]]</code>), or other conventions as <code>numberArray[[x]]</code></li>
<li>conventions for e.g. variables used as loop counters, flags, ...</li>
<li>use of mathematica like <code>xxxQ</code> functions vs. <code>isXxx</code> as used in many other languages</li>
<li>use a leading <code>$</code> to indicate use of a global variable.</li>
<li>all-uppercase names for constants (in wide use in other languages, but does anyone use that in Mathematica?)</li>
<li>allow single letter symbol names or not</li>
</ul>
<h2>Option Names</h2>
<p>all of the conventions made for symbol names need to be made here, not necessary with the same outcome. Additionally:</p>
<ul>
<li>use of strings vs. symbols for option names</li>
</ul>
<h1>Documentation</h1>
<ul>
<li>prefer inline documentation with <code>(**)</code> or extra text cells/lines before/after relevant (function) definitions</li>
<li>require usage messages, probably at least stubs for auto completion</li>
<li>have more detailed explanation in extra files (e.g. mathematical background, preliminary experiments etc.)</li>
</ul>
<h1>Code Layout</h1>
<h2>Use of Shortcuts, Parentheses and Such</h2>
<p>Mathematica code could theoretically be written in <code>FullForm</code>, and a team with a strong Lisp background might actually prefer that. But the language is full of shortcuts, and many of them help to make code more readable; with exaggerated use of shortcuts, however, Mathematica code can look like Perl one-liner contest examples, which would make good comic curse strings. It certainly makes sense to give some guidelines about the use of such shortcuts:</p>
<ul>
<li>avoid or prefer shortcuts in general?</li>
<li>white- and blacklists for shortcuts</li>
<li>define conditions under which shortcuts are to be used. (e.g. I often use <code>/@</code> when the resulting expression fits in a line and no additional parenthesis are required but otherwise I prefer an explicit <code>Map</code> with my standard convention for indenting and linebreaks).</li>
<li>it often makes sense to write parentheses even when they are not strictly necessary, so it might be relevant to define when parentheses are allowed/required/forbidden or to be replaced by code which doesn't need them (e.g. <code>()&</code> vs. <code>Function[]</code>).</li>
</ul>
<h2>Line Breaks and Indenting</h2>
<ul>
<li>where to put line breaks</li>
<li>for function definitions put linebreak after <code>:=</code> or not</li>
<li>extra linebreak before closing <code>]</code> and <code>}</code> or not</li>
<li>where to put spaces, where not</li>
<li>after <code>,</code> in list of arguments</li>
<li>between operators like <code>+</code>, <code>-</code>, <code>=</code></li>
<li>use standard form cells with automatic indentation or input form cells / pure text with manual indentation</li>
<li>how much indentation</li>
<li>use tabs or spaces for indentation</li>
</ul>
<h1>Constructs Preference/Shunning</h1>
<p>Mathematica is a very "rich" language and there are literally hundreds of ways to achieve the same thing. It might make sense to require certain standard solutions or preferences of certain constructs to help team members to easier understand other members code, e.g.:</p>
<ul>
<li>looping constructs: e.g. favour <code>Do</code> vs. <code>For</code>, favour non-indexing constructs like <code>Map</code> and <code>Scan</code> vs. their indexing counterparts <code>Table</code> / <code>Do</code></li>
<li>preferences of "paradigms" e.g. pattern matching vs. functional vs. procedural styles. e.g.: <code>Replace[result,$Failed:>(Message[...];Throw[...])</code> vs. <code>showMessageIfFailed[result];</code> vs. <code>If[result===$Failed,Message[...]]</code></li>
<li>use of pure functions (many of them nested are hard to read/understand)</li>
<li><code>f=Function[x,x^2]</code> vs. <code>f=#^2&</code> vs. <code>f[x]:=x^2</code></li>
<li>restrict use of symbols to those available to certain Mathematica versions.</li>
<li>object/data representation: <code>Association</code>, <code>Dataset</code>, list of rules (and again: symbol or string keys?), matrix/list with positional meaning, custom head denoting an object, <code>ManagedLibraryExpression</code></li>
</ul>
|
72,669 | <p>I encountered this site today <a href="https://code.google.com/p/google-styleguide/">https://code.google.com/p/google-styleguide/</a> regarding the programming style in some languages. What would be best programming practices in Mathematica, for small and large projects ?</p>
| Shredderroy | 9,257 | <p>In the excellent responses above, I find that one of my favourite guidelines is missing. It concerns the case when one has to apply one function after another.</p>
<p>Instead of writing</p>
<pre><code>f[g[h[k[q[x]]]]]
</code></pre>
<p>write either</p>
<pre><code>x // (q /* k /* h /* g /* f)
</code></pre>
<p>or</p>
<pre><code>RightComposition[q, k, h, g, f][x]
</code></pre>
<p>I prefer this method because it is reminiscent of the pipeline operator in F#. And triple-clicking has never been a good friend of mine.</p>
|
980,941 | <p>How can I calculate $1+(1+2)+(1+2+3)+\cdots+(1+2+3+\cdots+n)$? I know that $1+2+\cdots+n=\dfrac{n+1}{2}\dot\ n$. But what should I do next?</p>
| Hypergeometricx | 168,053 | <p>$$\begin{align}
&1+(1+2)+(1+2+3)+\cdots+(1+2+3+\cdots+n)\\
&=n\cdot 1+(n-1)\cdot 2+(n-2)\cdot 3+\cdots +1\cdot n\\
&=\sum_{r=1}^n(n+1-r)r\\
&=\sum_{r=1}^n {n+1-r\choose 1}{r\choose 1}\\
&={n+2\choose 1+2}\\
&={n+2\choose 3}\\
&=\frac16 n(n+1)(n+2)
\end{align}$$</p>
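<p>The closed form is easy to check by brute force (a Python sketch, my addition, not from the answer):</p>

```python
from math import comb

def nested_sum(n):
    # 1 + (1+2) + (1+2+3) + ... + (1+2+...+n)
    return sum(sum(range(1, k + 1)) for k in range(1, n + 1))

for n in range(1, 50):
    assert nested_sum(n) == comb(n + 2, 3) == n * (n + 1) * (n + 2) // 6
print("closed form verified for n = 1..49")
```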
|
479,551 | <p>A store carries three types of donuts: Strawberry, Chocolate and Glazed</p>
<p>Suppose you bought $4$ of each kind and in addition, you have the option to apply sprinkles on your donuts. How many ways are there to eat the donuts if you never eat two donuts in a row that both have sprinkles? </p>
<p>The idea I have is to apply inclusion-exclusion, but I am not sure where to start or if inclusion-exclusion is the best route to take. Any hints would be great. </p>
| ShreevatsaR | 205 | <p>I'll put hints below; ask me if you need help with any step.</p>
<p>First, count the number of ways of ordering the $12$ donuts: which $4$ are strawberry, which $4$ are chocolate, and which $4$ are glazed. This is a straightforward counting problem.</p>
<p>Second, count the number of ways of sprinkling (or not sprinkling) $12$ donuts, such that no two consecutive ones are sprinkled. This <em>could</em> be solved with the inclusion-exclusion principle I guess, though the simpler way I see to solve is to write a recurrence for the number of such ways, and evaluate it at $12$. Anyway, count this somehow.</p>
<p>Finally, the answer is the product of the two numbers, as they are orthogonal: you can think of the eating-order choice as that of first ordering the twelve donuts, and then sprinkling them.</p>
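<p>The two counts in these hints can be sketched in code (Python, my own illustration and assumptions, not part of the original answer); the sprinkle patterns with no two consecutive sprinkled donuts satisfy a Fibonacci-type recurrence, which is checked against brute force below:</p>

```python
from itertools import product
from math import comb, factorial

# Orderings of 4+4+4 donuts of three flavours: multinomial 12!/(4! 4! 4!)
orderings = factorial(12) // (factorial(4) ** 3)
assert orderings == comb(12, 4) * comb(8, 4)  # = 34650

# Sprinkle patterns with no two consecutive 1s: s(n) = s(n-1) + s(n-2)
# (last donut plain, or sprinkled and preceded by a plain one).
def patterns(n):
    a, b = 1, 2                     # s(0) = 1, s(1) = 2
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n else a

brute = sum(1 for bits in product((0, 1), repeat=12)
            if not any(x == y == 1 for x, y in zip(bits, bits[1:])))
assert patterns(12) == brute == 377  # a Fibonacci number

print("total ways:", orderings * patterns(12))
```

<p>Under these assumptions the product is $34650 \times 377 = 13{,}063{,}050$.</p>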
|
1,027,807 | <p>So I have this question that looks like</p>
<p>$$ \frac{x^3 + 3x^2 - x - 8}{x^2 + x - 6} $$</p>
<p>and first I got the partial fraction so getting </p>
<p>$$ x + 2 + \frac{3x + 4}{x^2 + x -6} $$</p>
<p>but now I'm trying to integrate it and I cannot remember for the life of me how I should integrate the fraction on the end. Please help.</p>
| John | 7,163 | <p>You can factor the denominator of the last term and decompose again:</p>
<p>$$\frac{3x+4}{x^2+x-6} = \frac{3x+4}{(x+3)(x-2)} = \frac{1}{x+3} + \frac{2}{x-2}.$$</p>
<p>Can you take it from there?</p>
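<p>The decomposition can be verified exactly with rational arithmetic (a Python sketch, my addition, not part of the original answer):</p>

```python
from fractions import Fraction as F

# Check x + 2 + 1/(x+3) + 2/(x-2) against the original rational function
# at a few sample points away from the poles x = -3 and x = 2.
for x in (F(1), F(7, 3), F(-5), F(100)):
    lhs = (x**3 + 3*x**2 - x - 8) / (x**2 + x - 6)
    rhs = x + 2 + F(1) / (x + 3) + F(2) / (x - 2)
    assert lhs == rhs, x
print("decomposition matches exactly")
```

<p>From there the antiderivative is $\frac{x^2}{2}+2x+\ln|x+3|+2\ln|x-2|+C$.</p>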
|
3,669,080 | <p>I would love to get some insight on how to solve <span class="math-container">$\int_0^{\frac\pi4}\log(1+\tan x)\,\mathrm dx$</span> using Leibniz rule of integration. I know it can be solved using the property<span class="math-container">$$\int_a^bf(x)\,\mathrm dx=\int_a^bf((a+b)-x)\,\mathrm dx,$$</span>but I find that method rather rigorous and am in search of a shorter elegant method.</p>
<p>Attached below is my attempt at doing it. I am unable to find <span class="math-container">$f(0)$</span> thus also the arbitrary constant.</p>
<p><img src="https://i.stack.imgur.com/f3osw.jpg"></p>
<p>My answer is <span class="math-container">$\dfrac π4\log(2)$</span>, which is wrong. Please correct my method and show me the right way to do it.</p>
<p>p.s please excuse my code I am new to Mathjax.</p>
| Eeyore Ho | 747,376 | <p>You should let <span class="math-container">$ I(t)=\int_{0}^{\frac{\pi}{4}}\ln(1+t \tan x) dx,I=I(1),I(0)=0 $</span>
<span class="math-container">$$ I'(t)=\int_{0}^{\frac{\pi}{4}}\frac{\tan x}{1+t \tan x} dx=\int_{0}^{\frac{\pi}{4}}\frac{\sin x}{\cos x+t \sin x} dx=\int_{0}^{\frac{\pi}{4}}\left( \frac{t}{1+t^2}+\frac{-1}{1+t^2} \frac{-\sin x+t \cos x}{\cos x+t \sin x} \right) dx=\frac{\pi}{4} \frac{t}{1+t^2}+\frac{\ln 2}{2} \frac{1}{1+t^2} -\frac{\ln(1+t)}{1+t^2}$$</span></p>
<p>We note
<span class="math-container">$$ I=\int_{0}^{\frac{\pi}{4}}\ln(1+\tan x) dx=\int_{0}^{1}\frac{\ln(1+t)}{1+t^2} dt (\tan x =t) $$</span>
Thus
<span class="math-container">$$ I=\int_{0}^{1}I'(t) dt =\int_0^1 \frac{\pi}{4} \frac{t}{1+t^2} dt+\int_0^1 \frac{\ln 2}{2} \frac{1}{1+t^2} dt -\int_0^1 \frac{\ln(1+t)}{1+t^2} dt $$</span>
<span class="math-container">$$ I=\frac{\pi \ln 2}{4}-I $$</span>
<span class="math-container">$$ I=\frac{\pi \ln 2}{8} $$</span></p>
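<p>The result is easy to confirm numerically (a Python sketch, my addition, not part of the original answer):</p>

```python
import math

def integral(f, a, b, n=100_000):
    # Composite trapezoidal rule.
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

val = integral(lambda x: math.log(1 + math.tan(x)), 0.0, math.pi / 4)
assert abs(val - math.pi * math.log(2) / 8) < 1e-9
print(val, "vs", math.pi * math.log(2) / 8)
```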
|
487,084 | <p>I need to know if every group whose order is a power of a prime $p$ contains an element of order $p$? Should I proceed by picking an element $g$ of the group and proving that there is an element in $\langle g \rangle$ that has order $p$?</p>
| N. S. | 9,176 | <p>This follows immediately from Lagrange theorem, you don't need any stronger result.</p>
<p>If the order of the group is $p^k$ with $k \neq 0$, then by Lagrange Theorem, the order of any element divides $p^k$.</p>
<p>Pick some $x \in G, x \neq e$. Then the order of $x$ is $p^m$ with $1 \leq m \leq k$. Let</p>
<p>$$y:=x^{p^{m-1}} \,.$$</p>
<p>Prove that the order of $y$ is $p$. </p>
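<p>A concrete illustration of this last step (a Python sketch, my addition; it uses the cyclic group $\mathbb{Z}/p^k\mathbb{Z}$ written additively, so that $x^{p^{m-1}}$ becomes $p^{m-1}x$):</p>

```python
from math import gcd

def order(x, modulus):
    # Additive order of x in Z/modulus: smallest t > 0 with t*x = 0 (mod modulus).
    return modulus // gcd(x, modulus)

p, k = 3, 3                     # the group Z/27 has order p^k = 27
for x in range(1, p ** k):
    m = 0                       # the order of x is p^m for some 1 <= m <= k
    while order(x, p ** k) != p ** m:
        m += 1
    y = (p ** (m - 1) * x) % p ** k    # y = x^{p^{m-1}}, written additively
    assert order(y, p ** k) == p, (x, y)
print("every y has order exactly p =", p)
```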
|
3,256,767 | <p>So I'm trying to understand a solution made by my teacher for a question that asks me to determine whether the following is true. I'm having trouble understanding where some values in the steps are coming from.</p>
<p>Like for the first part, I don't really get where n≥5 came from. My guess is getting 16n^2 + 25 to equal 16n^2 + n^2 by substituting n with 5. But I was wondering why 25 turned into n^2 in the first place?</p>
<p>I also have no idea where k = 5 came from.</p>
<p>For the second part of the solution, I'm also having similar struggles. Why did 16n^2 turn into 15n^2 + n^2? I'm also not sure where n≥41 and k=41 came from. I would really appreciate some clarification because I'm having trouble understanding this unit. </p>
<p><a href="https://i.stack.imgur.com/97ORS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97ORS.png" alt="enter image description here"></a></p>
| DonAntonio | 31,254 | <p>Simply define <span class="math-container">$\;T:\Bbb R^4\to\Bbb R^3\;$</span> by </p>
<p><span class="math-container">$$T(1,1,1,1)=(1,2,1)\,,\,\,T(1,0,1,0):=(2,1,0)$$</span></p>
<p>and then complete <span class="math-container">$\;\{v_1=(1,1,1,1)\,,\,\,v_2=(1,0,1,0)\}\;$</span> to a basis of <span class="math-container">$\;\Bbb R^4\;$</span> and on the two other vectors define <span class="math-container">$\;T\;$</span> to be zero. For example, take <span class="math-container">$\;\{v_3=(0,1,0,0)\,,\,\,v_4=(0,0,1,0)\}\;$</span> . Check now that <span class="math-container">$\;A:=\{v_1,v_2,v_3,v_4\}\;$</span> is a basis of <span class="math-container">$\;\Bbb R^4\;$</span> , and (again), define:</p>
<p><span class="math-container">$$Tv_1=(1,2,1)\,,\,Tv_2=(2,1,0)\,,\,\,Tv_3=Tv_4=(0,0,0)$$</span></p>
<p>and extend the above definition by linearity, thus obtaining a linear map. And now represent this map wrt the above basis in <span class="math-container">$\;\Bbb R^4\;$</span> and say the standard one <span class="math-container">$\;B\;$</span> in <span class="math-container">$\;\Bbb R^3\;$</span>, getting :</p>
<p><span class="math-container">$$[T]_A^B=\begin{pmatrix}
1&2&0&0\\
2&1&0&0\\
1&0&0&0\end{pmatrix}$$</span></p>
|
1,840,924 | <p>My teacher showed this proof using the dominated convergence theorem or Fourier analysis, but I wonder if there is an elementary proof of this problem. My teacher said it is difficult to solve this in an elementary way, but do you know how?</p>
| stewbasic | 197,161 | <p>Here is a proof using measure theory (so unfortunately not elementary).</p>
<p>Fix the subsequence $n_1<n_2<\ldots$ and let $X$ be the set of $x\in\mathbb R$ for which $\sin(n_kx)$ converges. For any $\epsilon\in(0,1)$ let
$$\begin{eqnarray*}
Y_k^\epsilon&=&\{x\in[0,4\pi]\mid|\sin(n_kx)-\sin(n_{k+1}x)|<2\epsilon^2\},\\
X_k^\epsilon&=&\bigcap_{l\geq k}Y_l^\epsilon.
\end{eqnarray*}$$
Note that each $Y_k^\epsilon$ is open and therefore measurable, so $X_k^\epsilon$ is also measurable. Moreover
$$
|\sin(n_kx)-\sin(n_{k+1}x)|=2|\cos(ax)\sin(bx)|
$$
where $a=\frac{n_k+n_{k+1}}2$, $b=\frac{n_k-n_{k+1}}2$. Hence $Y_k^\epsilon\subseteq A_k^\epsilon\cup B_k^\epsilon$ where
$$\begin{eqnarray*}
A_k^\epsilon&=&\{x\in[0,4\pi]\mid|\cos(ax)|<\epsilon\},\\
B_k^\epsilon&=&\{x\in[0,4\pi]\mid|\sin(bx)|<\epsilon\}.
\end{eqnarray*}$$
We have
$$\begin{eqnarray*}
\mu(A_k^\epsilon)&=&\frac1a\mu(\{u\in[0,4a\pi]\mid|\cos(u)|<\epsilon\})\\
&=&2\mu(\{u\in[0,2\pi]\mid|\cos(u)|<\epsilon\})\\
&=&2\mu\left((\pi/2-\delta,\pi/2+\delta)\cup(3\pi/2-\delta,3\pi/2+\delta)\right)\\
&=&8\delta
\end{eqnarray*}$$
where $\delta=\sin^{-1}(\epsilon)$. A similar calculation applies to $B_k^\epsilon$. Hence
$$
\mu(X_k^\epsilon)\leq\mu(Y_k^\epsilon)\leq16\sin^{-1}(\epsilon).
$$
Note that $X_1^\epsilon\subseteq X_2^\epsilon\subseteq\ldots$, so
$$
\mu\left(\bigcup_kX_k^\epsilon\right)\leq16\sin^{-1}(\epsilon).
$$
Finally
$$
X\cap[0,4\pi]\subseteq\bigcup_kX_k^\epsilon.
$$
Since $\epsilon\in(0,1)$ was arbitrary and $X$ is $2\pi$-periodic, $X$ cannot contain any set of positive measure. (I'd like to say that $\mu(X)=0$, but it may not be measurable). In particular $X$ cannot equal $\mathbb R$.</p>
<hr>
<p>I tried to find a proof which shows that $X$ is countable, but this is not always true. For example, let $n_k=k!$. Let $A$ be the (uncountable) set of sequences with values in $\{0,1\}$, and define $f:A\rightarrow\mathbb R$ by
$$
f(a)=2\pi\sum_{k=1}^\infty\frac{(-1)^{a_k}}{k!}.
$$
Note that for $k\geq1$ we have
$$
\left|\sum_{l>k}\frac{(-1)^{a_l}}{l!}\right|\leq
\sum_{l>k}\frac{1}{l!}\leq\frac{1}{(k+1)!}\sum_{l>k}\frac{1}{2^{l-k-1}}=\frac2{(k+1)!}\leq\frac1{k!}.
$$
This implies $f$ is injective, so $f(A)$ is uncountable. Moreover
$$
|\sin(n_kf(a))|=\left|\sin\left(2\pi k!\sum_{l>k}\frac{(-1)^{a_l}}{l!}\right)\right|
\leq\frac{4\pi}{k+1}
$$
so $\sin(n_kf(a))\rightarrow0$ as $k\rightarrow\infty$. Thus $f(A)\subseteq X$.</p>
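<p>The $n_k=k!$ construction is easy to probe numerically (a Python sketch, my addition, not from the answer), here with the alternating sequence $a=(0,1,0,1,\dots)$:</p>

```python
import math

K = 18  # series terms; 1/19! is far below double precision
# f(a) = 2*pi * sum (-1)^{a_k} / k!, with a_k = 0 for odd k and 1 for even k.
x = 2 * math.pi * math.fsum((1 if k % 2 else -1) / math.factorial(k)
                            for k in range(1, K + 1))

for k in range(1, 13):
    # the bound from the answer: |sin(k! * x)| <= 4*pi/(k+1)
    assert abs(math.sin(math.factorial(k) * x)) <= 4 * math.pi / (k + 1) + 1e-6, k
print("|sin(k! x)| stays below 4*pi/(k+1), so it tends to 0 along the subsequence")
```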
|
1,840,924 | <p>My teacher showed this proof using the dominated convergence theorem or Fourier analysis, but I wonder if there is an elementary proof of this problem. My teacher said it is difficult to solve this in an elementary way, but do you know how?</p>
| stewbasic | 197,161 | <p>Here is an elementary proof inspired by <a href="https://gowers.wordpress.com/2008/07/25/what-is-deep-mathematics/" rel="nofollow noreferrer">this proof</a>.</p>
<p>Fix the subsequence $1\leq n_1<n_2<\ldots$. I will inductively construct sequences $a_1\leq a_2\leq\ldots$ and $b_1\geq b_2\geq\ldots$ such that:</p>
<ul>
<li>$a_i<b_i$</li>
<li>$\sin(n_k[a_i,b_i])\subseteq(-1)^i[1/2,1]$ for some $n_k\geq i$.</li>
</ul>
<p>Let $a_1=(-\pi/2)/n_1$ and $b_1=(-\pi/6)/n_1$. Suppose $[a_i,b_i]$ have been constructed. Pick $n_k$ larger than $i$ and larger than $3\pi/(b_i-a_i)$. Let $m=\lfloor n_kb_i/(2\pi)-1/4\rfloor$. Note that
$$
2\pi m+\pi/2\leq n_kb_i,
$$
$$
2\pi m-\pi/2\geq n_kb_i-3\pi\geq n_ka_i.
$$
Thus we can take
$$
a_{i+1}=\left(2\pi m+\pi/6\right)/n_k,\,
b_{i+1}=\left(2\pi m+\pi/2\right)/n_k
$$
if $i+1$ is even, and
$$
a_{i+1}=\left(2\pi m-\pi/2\right)/n_k,\,
b_{i+1}=\left(2\pi m-\pi/6\right)/n_k
$$
if $i+1$ is odd. This concludes the inductive construction.</p>
<p>Now $a_i$ is a bounded monotonic sequence, so it converges to some $a$. For each $i$ we have $a_i\leq a\leq b_i$, so
$$
\sin(n_ka)\in(-1)^i[1/2,1]
$$
for some $n_k\geq i$. This shows $\sin(n_ka)$ cannot converge as $k\to\infty$.</p>
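<p>The construction can be run numerically (a Python sketch, my addition, not part of the original answer; for concreteness I take the subsequence to be all of $\mathbb{N}$, i.e. $n_k=k$, and pick the required indices from it as in the proof):</p>

```python
import math

n1 = 1
a, b = (-math.pi / 2) / n1, (-math.pi / 6) / n1
records = [(-1, n1)]                       # (sign of the band, n_k) for i = 1

for i in range(2, 11):
    nk = max(i, math.ceil(3 * math.pi / (b - a))) + 1   # n_k > i and > 3*pi/(b-a)
    m = math.floor(nk * b / (2 * math.pi) - 0.25)
    if i % 2 == 0:                          # band [1/2, 1]
        a, b = (2 * math.pi * m + math.pi / 6) / nk, (2 * math.pi * m + math.pi / 2) / nk
    else:                                   # band [-1, -1/2]
        a, b = (2 * math.pi * m - math.pi / 2) / nk, (2 * math.pi * m - math.pi / 6) / nk
    records.append((1 if i % 2 == 0 else -1, nk))

x = (a + b) / 2                             # lies in every interval of the chain
for sign, nk in records:
    assert sign * math.sin(nk * x) >= 0.5 - 1e-6, (sign, nk)
print("sin(n_k x) oscillates between the two bands; no convergence at x =", x)
```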
|
976,881 | <p>Simple and quick question. These two have to do with 90 degree angles.</p>
<p>This is the picture of the two words I have.</p>
<blockquote>
<p>Perpendicular is strictly restricted to lines.</p>
</blockquote>
<ul>
<li><p>"Line A and B are perpendicular to each other."</p></li>
<li><p>"v=(1, 1) and w=(-1, 1) -> cv and dw are perpendicular towards each other."</p></li>
</ul>
<blockquote>
<p>Orthogonal is restricted to matrices.</p>
</blockquote>
<pre><code>1 1 1 1
1 -1 1 -1
1 1 -1 -1
1 -1 -1 1
</code></pre>
<ul>
<li>The above matrix is an orthogonal matrix.</li>
</ul>
<p>So would the following statement be correct: an orthogonal matrix has vectors which are perpendicular to each other?</p>
| Karthik Upadhya | 98,696 | <p>Yep. Perpendicular to each other in a higher-dimensional space (4 dimensions in your example).</p>
<p>In fact, you can write out the angle $\theta$ between two vectors $x,y$ as</p>
<p>$$\cos(\theta) = \frac{x^T y}{\sqrt{x^T x} \sqrt{y^T y}} $$</p>
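<p>A concrete check on the matrix from the question (a Python sketch, my addition, not part of the original answer): every pair of distinct columns has dot product $0$, so the formula above gives $\cos\theta = 0$, i.e. a $90$ degree angle in $\mathbb{R}^4$. (Strictly, the columns here have norm $2$, so it is the matrix divided by $2$ that has orthonormal columns.)</p>

```python
import math

M = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

cols = list(zip(*M))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for i in range(4):
    for j in range(i + 1, 4):
        x, y = cols[i], cols[j]
        cos_theta = dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))
        assert cos_theta == 0.0, (i, j)
print("all column pairs are perpendicular (angle 90 degrees)")
```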
|
638,566 | <p>I came across a question yesterday about combinations, and I wanted to know what the correct answer was. The question states as follows:</p>
<p>There are 8 spaces that are alternately black and white. There is one king, one queen, two identical rooks, two identical bishops, and two identical knights. The king needs to be surrounded by the two rooks. Then, the two bishops can be put on any of the remaining spaces as long as one is on a black space and the other is on a white space. Finally, the queen and the knights can be placed anywhere in the remaining spaces in relation to the other pieces. How many possible arrangements are there considering these conditions?</p>
| Ross Millikan | 1,827 | <p>Hint: how many ways are there to select the leftmost square of the RKR series (I am taking surrounded by as one rook is immediately next to the king on each side)? Now place the bishops-how many ways to do that? Now place the queen-how many ways? The knights are now fixed.</p>
|
2,203,907 | <p>I am trying to show that:
$$\mathcal{L}\{erfc( \frac{k}{2\sqrt t})\} = \frac{1}{s}e^{-k\sqrt s}$$
The hint given for this question is the Laplace Transform of an integral (from convolution):
$$\mathcal{L}\{\int_{0}^{t}f(u) \, du \} = \frac{1}{s} \mathcal{L}\{f(t)\} \tag{1}$$</p>
<p>I have read in a different text that it is sufficient to show that:
$$\mathcal{L}\{\frac{d}{dt} erfc(\frac{k}{2 \sqrt t})\} = e^{-k \sqrt s} \tag{2} $$ </p>
<p>Can somebody explain to me how $(2)$ relates to $(1)$?
As I see it, $(2)$ changes the integral to $\frac{1}{s}$ but then why are we required to differentiate $f(u) = erfc(\frac{k}{2 \sqrt t})$?</p>
| Fabian | 7,266 | <p>Let us denote with $$F(t) = \mathop{\rm erfc}(k/2\sqrt{t})$$
and also $$f(t)= \frac{d}{dt}F(t).$$</p>
<p>Now let us look at the expression
$$\mathcal{L}\{F(t) \}=\mathcal{L}\{\int_{0}^{t}f(u) \, du \} = \frac{1}{s} \mathcal{L}\{f(t)\} \tag{1}.$$
In order to calculate $\mathcal{L}\{F(t) \}$, you need to evaluate
$$\frac{1}{s} \mathcal{L}\{f(t)\} = \frac{1}{s} \mathcal{L}\{\frac{d}{dt} \mathop{\rm erfc}(k/2\sqrt{t})\}. $$</p>
<p>Given the fact that
$$f(t) = \frac{k e^{-\frac{k^2}{4 t}}}{2 \sqrt{\pi } t^{3/2}}$$
you won't have any problems solving your problem.</p>
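<p>A numerical check of the transform pair (a Python sketch, my addition, not part of the original answer), for the sample values $k=1$, $s=2$:</p>

```python
import math

k, s = 1.0, 2.0

def F(t):
    # erfc(k / (2 sqrt t)), extended by its limit erfc(+inf) = 0 at t = 0.
    return math.erfc(k / (2 * math.sqrt(t))) if t > 0 else 0.0

# Truncated trapezoidal approximation of Integral_0^inf e^{-s t} F(t) dt;
# the tail beyond T = 20 is below e^{-40}/s.
T, n = 20.0, 200_000
h = T / n
numeric = h * (F(0.0) / 2
               + sum(math.exp(-s * i * h) * F(i * h) for i in range(1, n))
               + math.exp(-s * T) * F(T) / 2)

assert abs(numeric - math.exp(-k * math.sqrt(s)) / s) < 1e-6
print(numeric, "vs", math.exp(-k * math.sqrt(s)) / s)
```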
|
3,416,019 | <p>If a function <span class="math-container">$f:A\longrightarrow B\times C, f(x)=(g(x),h(x))$</span> is injective, does it imply that <span class="math-container">$h(x)$</span> and <span class="math-container">$g(x)$</span> are also injective? I think this is straight forward but just want to confirm. </p>
<p>Suppose that <span class="math-container">$g,h$</span> are not injective. This means for some <span class="math-container">$x,y,x\neq y\in A$</span>, we have <span class="math-container">$g(x)=g(y)$</span> and <span class="math-container">$h(x)=h(y)$</span>, that is <span class="math-container">$(g(x),h(x))=(g(y),h(y))\Longrightarrow f(x)=f(y)$</span>, a contradiction since <span class="math-container">$f$</span> is injective.</p>
<p>Is this correct?</p>
| John Hughes | 114,036 | <ol>
<li><p>Your question should be about <span class="math-container">$g$</span> and <span class="math-container">$h$</span>, not <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. </p></li>
<li><p>Your argument is mistaken. For instance, <span class="math-container">$h(x) = 1, g(x) = x$</span> makes <span class="math-container">$f$</span> injective, but <span class="math-container">$h$</span> is certainly not. See whether, looking at this counterexample, you can figure out what was wrong with your argument. </p></li>
</ol>
|
262,003 | <p>Is there a positive integer $N$, besides 1 and 2, such that there is a permutation $a_1=1,a_2,a_3,\dots,a_N$ of $1,2,3,\dots,N$ in which for each $k>1$, $a_k=a_{k-1}\div k,a_k=a_{k-1}-k,a_k=a_{k-1}+k,\textrm{or }a_k=a_{k-1}\times k$?</p>
| Gerhard Paseman | 3,402 | <p>It may be of interest to see how far it can be continued, but it cannot be completed. Suppose we ignore the restriction that the first term must be 1. The last term must be the previous term times or divided by N. So the last two terms are 1,N or N,1. Now unless N is 2, there is no way to use N-1 to generate the term N and maintain a permutation, so the second to last term must be 1, from which it follows the previous term must be N-1. If N is 3, this gives the last remaining permutation 2,1,3 satisfying the relaxation. For if N is greater, there is no way to use N-2 to make N-1 without using 1, and so the permutation property is broken.</p>
<p>As evidenced in the comments, one can use plus and minus as alternating operations to generate long stretches of the permutation having the properties desired for many k.</p>
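<p>The impossibility is easy to confirm by brute force for small $N$ (a Python sketch, my addition, not from the answer); the search also recovers the single relaxed solution $2,1,3$ for $N=3$ mentioned above:</p>

```python
from itertools import permutations
from fractions import Fraction

def valid(seq):
    # Check a_k in {a_{k-1} + k, a_{k-1} - k, a_{k-1} * k, a_{k-1} / k} for k >= 2.
    for k, cur in enumerate(seq[1:], start=2):
        prev = seq[k - 2]
        if cur not in (prev + k, prev - k, prev * k, Fraction(prev, k)):
            return False
    return True

# With the a_1 = 1 requirement: no solutions for N = 3..7.
for N in range(3, 8):
    assert not any(valid(p) for p in permutations(range(1, N + 1)) if p[0] == 1)

# Relaxed version (any starting value): N = 3 leaves exactly 2, 1, 3.
relaxed = [p for p in permutations((1, 2, 3)) if valid(p)]
assert relaxed == [(2, 1, 3)]
print("no solutions for N = 3..7; relaxed N = 3 gives only (2, 1, 3)")
```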
<p>Gerhard "Recommends Asking Another, Separate Question" Paseman, 2017.02.11.</p>
|
2,113,596 | <p>Questions with likely obvious answers, but I don't have the required intuition to go with the flow.</p>
<p>Consider $a+be^x + ce^{-x} = 0$. To solve it for the constants, we can try out different values of $x$ to get a system of $3$ equations and simply use a calculator (given technique). Why are we allowed to stick various values of $x$ into the equation? Is that because the equation is valid for all $x$? I mean the fact(?) that this equation holds for all $x$ allows us to stick any value of $x$ into the equation whose byproduct is the isolation of constants(just a bonus). Does that make sense?</p>
<p>Consider $k_1(1) + k_2(t^2 - 2t) + 5k_3(t - 1)^2 = (k_2 + 5k_3)t^2 +(-k_2 - 10k_3)t + (k_1 + 5k_3) = 0.$ To solve the LHS of the equation for the constants, we can let $(k_2 + 5k_3) = (-k_2 - 10k_3) = (k_1 + 5k_3) = 0$ and solve the equality for $k_i.$ Call this maneuver $X.$ Why are we allowed to do $X$? Is it because $X$ is one of the solutions to $(k_2 + 5k_3)t^2 +(-k_2 - 10k_3)t + (k_1 + 5k_3) = 0$ which allows us to consider $X$? Hopefully, this makes sense. </p>
| Community | -1 | <p>Let $F'=F+1$ denote the new temperature in Fahrenheit where $F$ is the old temperature. Let $C'$ and $C $ similarly represent the new and old temperature in Celsius. </p>
<p>Then we get $$C' =\frac {5}{9}((F+1)-32) =\frac {5}{9}(F-32) +\frac{5}{9} = C +\frac {5}{9} $$ Hope it helps. </p>
|
2,801,406 | <blockquote>
<p>Find the coordinates of the points where the line tangent to the curve $$x^2-2xy+2y^2=4$$ is parallel to the $x$-axis, given that $$\frac{dy}{dx}=\frac{y-x}{2y-x}$$</p>
</blockquote>
<p>By letting $dy/dx = 0$ I get $y=x$ which is no help... what do I do?</p>
<p>Thanks</p>
| tien lee | 557,074 | <p>$$x^2-2xy+2y^2=4$$
$$\frac{d}{dx}(x^2-2xy+2y^2)=\frac{d}{dx}(4)$$
$$2x-2(y+x\frac{dy}{dx})+4y\frac{dy}{dx}=0$$
$$2x-2y-2x\frac{dy}{dx}+4y\frac{dy}{dx}=0$$
$$\frac{dy}{dx}=\frac{y-x}{2y-x}$$
Now set $\frac{dy}{dx}=0$:
$$\frac{y-x}{2y-x}=0$$
A fraction is zero exactly when its numerator is zero (and its denominator is not), so
$$y=x$$
Solve by substitution: I substituted $y=x$ in $x^2-2xy+2y^2=4$ and got
$$x^2-2x^2+2x^2=4$$
$$x=-2,2$$
Since $y=x$, the corresponding values are $$y=-2,2$$
<p>The points are $(-2,-2),(2,2)$</p>
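<p>A quick check that both points lie on the curve and make the derivative vanish (a Python sketch, my addition):</p>

```python
for x, y in [(2, 2), (-2, -2)]:
    assert x**2 - 2*x*y + 2*y**2 == 4          # the point is on the curve
    assert (y - x) == 0 and (2*y - x) != 0     # dy/dx = (y-x)/(2y-x) = 0
print("horizontal tangents at (2, 2) and (-2, -2)")
```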
|
6,562 | <p>I want to make some button shaped graphics that would essentially be a rectangular shape with curved edges. In the example below I have used <code>Polygon</code> rather than <code>Rectangle</code> so as to take advantage of <code>VertexColors</code> and have a gradient fill. The code below illustrates the sort of thing I want in so far as the <code>Frame</code> with <code>RoundingRadius</code> shows where I want the boundaries of the <code>Graphic</code> to be cut off (for example).</p>
<pre><code>Framed[Graphics[{
Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}},
VertexColors -> {Red, Red, Blue, Blue}]
},
AspectRatio -> 0.2,
ImagePadding -> 0,
ImageMargins -> 0,
ImageSize -> 200,
PlotRangePadding -> 0],
ContentPadding -> True,
FrameMargins -> 0,
ImageMargins -> 0,
RoundingRadius -> 20]
</code></pre>
<p>I'm thinking there is probably a very straight forward way of accomplishing this. Is there some way to exclude parts of the <code>Graphic</code> that fall outside the <code>Frame</code> from displaying? Any alternative methods would be welcome.</p>
<p><strong>Edit</strong></p>
<p>I had been expecting that this was going to be possible with existing options rather than having to write functions. @Mr.Wizard provided a concise solution from existing built in functionality but I ultimately didn't want a raster solution. @Heike used <code>RegionPlot</code> like the others, but in a way in which the user, i.e. me, could implement it by simply changing a rounding radius parameter, so that makes it a more straight forward solution IMO.</p>
| M.R. | 403 | <p>Vitaliy had a great answer. I guess another way to do this is to simply make the curves using many lines:</p>
<p><img src="https://i.stack.imgur.com/Vv8z8.png" alt="enter image description here"></p>
<p>In the following code, resolution is the number of lines used to make the curve and m is how big the corners are.</p>
<pre><code> resolution = 30;
w = 2;
h = 1;
m = 0.1;
circlePoint[center_, radius_, radian_] := radius {Cos[radian], Sin[radian]} + center;
max = Max[w, h]*m;
pts1 = Sequence @@
Table[circlePoint[{max, max}, max, r], {r, \[Pi],
3 \[Pi]/2, \[Pi]/2/resolution}];
pts2 = Sequence @@
Table[circlePoint[{w - max, max}, max, r], {r, 3 \[Pi]/2,
2 \[Pi], \[Pi]/2/resolution}];
pts3 = Sequence @@
Table[circlePoint[{w - max, h - max}, max, r], {r,
0, \[Pi]/2, \[Pi]/2/resolution}];
pts4 = Sequence @@
Table[circlePoint[{max, h - max}, max,
r], {r, \[Pi]/2, \[Pi], \[Pi]/2/resolution}];
Graphics[{Polygon[{{0, max}, pts1, { max, 0}, {w - max, 0},
pts2, {w, max}, {w, h - max}, pts3, {w - max, h}, {max, h},
pts4, {0, h - max}},
VertexColors ->
Join[Table[Red, {2 (resolution + 1) + 4}],
Table[Blue, {2 (resolution + 1) + 4}]]]}]
</code></pre>
|
6,562 | <p>I want to make some button shaped graphics that would essentially be a rectangular shape with curved edges. In the example below I have used <code>Polygon</code> rather than <code>Rectangle</code> so as to take advantage of <code>VertexColors</code> and have a gradient fill. The code below illustrates the sort of thing I want in so far as the <code>Frame</code> with <code>RoundingRadius</code> shows where I want the boundaries of the <code>Graphic</code> to be cut off (for example).</p>
<pre><code>Framed[Graphics[{
Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}},
VertexColors -> {Red, Red, Blue, Blue}]
},
AspectRatio -> 0.2,
ImagePadding -> 0,
ImageMargins -> 0,
ImageSize -> 200,
PlotRangePadding -> 0],
ContentPadding -> True,
FrameMargins -> 0,
ImageMargins -> 0,
RoundingRadius -> 20]
</code></pre>
<p>I'm thinking there is probably a very straight forward way of accomplishing this. Is there some way to exclude parts of the <code>Graphic</code> that fall outside the <code>Frame</code> from displaying? Any alternative methods would be welcome.</p>
<p><strong>Edit</strong></p>
<p>I had been expecting that this was going to be possible with existing options rather than having to write functions. @Mr.Wizard provided a concise solution from existing built in functionality but I ultimately didn't want a raster solution. @Heike used <code>RegionPlot</code> like the others, but in a way in which the user, i.e. me, could implement it by simply changing a rounding radius parameter, so that makes it a more straight forward solution IMO.</p>
| Mr.Wizard | 121 | <p>The raster method I alluded to in a comment was requested.</p>
<pre><code>g1 = Graphics[{
Polygon[{{0, 0}, {3, 0}, {3, 1}, {0, 1}}, VertexColors -> {Red, Red, Blue, Blue}]
}]
g2 = Graphics[{Rectangle[{0, 0}, {3, 1}, RoundingRadius -> 0.5]}]
ImageAdd[g1, g2]
</code></pre>
<p><img src="https://i.stack.imgur.com/Xtd67.png" alt="Mathematica graphics"></p>
|
6,562 | <p>I want to make some button shaped graphics that would essentially be a rectangular shape with curved edges. In the example below I have used <code>Polygon</code> rather than <code>Rectangle</code> so as to take advantage of <code>VertexColors</code> and have a gradient fill. The code below illustrates the sort of thing I want in so far as the <code>Frame</code> with <code>RoundingRadius</code> shows where I want the boundaries of the <code>Graphic</code> to be cut off (for example).</p>
<pre><code>Framed[Graphics[{
Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}},
VertexColors -> {Red, Red, Blue, Blue}]
},
AspectRatio -> 0.2,
ImagePadding -> 0,
ImageMargins -> 0,
ImageSize -> 200,
PlotRangePadding -> 0],
ContentPadding -> True,
FrameMargins -> 0,
ImageMargins -> 0,
RoundingRadius -> 20]
</code></pre>
<p>I'm thinking there is probably a very straight forward way of accomplishing this. Is there some way to exclude parts of the <code>Graphic</code> that fall outside the <code>Frame</code> from displaying? Any alternative methods would be welcome.</p>
<p><strong>Edit</strong></p>
<p>I had been expecting that this was going to be possible with existing options rather than having to write functions. @Mr.Wizard provided a concise solution from existing built in functionality but I ultimately didn't want a raster solution. @Heike used <code>RegionPlot</code> like the others, but in a way in which the user, i.e. me, could implement it by simply changing a rounding radius parameter, so that makes it a more straight forward solution IMO.</p>
| Carl Woll | 45,431 | <p>I think you can achieve what you want using <a href="http://reference.wolfram.com/language/ref/Texture.html" rel="nofollow noreferrer"><code>Texture</code></a>. Basically, create a <a href="http://reference.wolfram.com/language/ref/FilledCurve.html" rel="nofollow noreferrer"><code>FilledCurve</code></a> version of a rounded rectangle, and then use <a href="http://reference.wolfram.com/language/ref/VertexTextureCoordinates.html" rel="nofollow noreferrer"><code>VertexTextureCoordinates</code></a> to map a texture onto the <a href="http://reference.wolfram.com/language/ref/FilledCurve.html" rel="nofollow noreferrer"><code>FilledCurve</code></a>. First, here is a function to generate corners:</p>
<pre><code>corner[{x_,y_}, r_, {a1_,a2_}] := With[{phi = a2-a1},
BezierCurve @ AffineTransform[{{{Cos[a1], -Sin[a1]}, {Sin[a1], Cos[a1]}}, {x,y}}][
r * {
{1, 4/3 Tan[phi/4]},
{Cos[phi]+4/3 Tan[phi/4] Sin[phi], Sin[phi]-4/3 Tan[phi/4] Cos[phi]},
{Cos[phi], Sin[phi]}
}
]
]
</code></pre>
<p>Note that the first coordinate (<code>{1, 0}</code>) of the <a href="http://reference.wolfram.com/language/ref/BezierCurve.html" rel="nofollow noreferrer"><code>BezierCurve</code></a> has been dropped because of the way <a href="http://reference.wolfram.com/language/ref/FilledCurve.html" rel="nofollow noreferrer"><code>FilledCurve</code></a> works. With this corner function, we can create a <code>roundedRectangle</code> function that creates the desired <a href="http://reference.wolfram.com/language/ref/FilledCurve.html" rel="nofollow noreferrer"><code>FilledCurve</code></a>:</p>
<pre><code>Options[roundedRectangle] = {RoundingRadius -> 0};
roundedRectangle[{x1_,y1_}, {x2_,y2_}, OptionsPattern[]] := Module[
{r = OptionValue[RoundingRadius], curve, vertices},
curve={
Line[{{x1+r,y1}, {x2-r,y1}}],
corner[{x2-r,y1+r}, r, {3Pi/2,2Pi}],
Line[{{x2,y1+r}, {x2,y2-r}}],
corner[{x2-r,y2-r}, r, {0,Pi/2}],
Line[{{x2-r,y2}, {x1+r,y2}}],
corner[{x1+r,y2-r}, r, {Pi/2,Pi}],
Line[{{x1,y2-r}, {x1,y1+r}}],
corner[{x1+r,y1+r}, r, {Pi,3Pi/2}]
};
vertices = curve /. {Line[{a__}] :> a, BezierCurve[{a__}] :> a};
FilledCurve[
curve,
VertexTextureCoordinates -> RescalingTransform[{{x1,x2},{y1,y2}}][vertices]
]
]
</code></pre>
<p>Here is an example of a rounded rectangle using the OP's desired gradient fill:</p>
<pre><code>Graphics[{
Texture @ Graphics[
Polygon[{{0,0}, {1,0}, {1,1}, {0,1}}, VertexColors->{Red,Red,Blue,Blue}],
PlotRangePadding->0
],
roundedRectangle[{0,0}, {1,.2}, RoundingRadius->.05]
}]
</code></pre>
<p><a href="https://i.stack.imgur.com/SZZA4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SZZA4.png" alt="enter image description here"></a></p>
<p>And here is a more interesting <a href="http://reference.wolfram.com/language/ref/Texture.html" rel="nofollow noreferrer"><code>Texture</code></a>:</p>
<pre><code>Graphics[{
Texture[ExampleData[{"TestImage", "Sailboat"}]],
roundedRectangle[{0,0}, {1,.2}, RoundingRadius->0.05]
}]
</code></pre>
<p><a href="https://i.stack.imgur.com/Q5Pqt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q5Pqt.png" alt="enter image description here"></a></p>
|
1,829,975 | <p>Find three different systems of linear equation whose solutions are $x_1 = 3, x_2 = 0, x_3 = -1$</p>
<p>I'm confused, how exactly can I do this?</p>
| Doug M | 317,162 | <p>Come up with a linear equation that is satisfied by the indicated point.</p>
<p>e.g. $x + y + 3z = 0$ (check: $3 + 0 + 3(-1) = 0$).
Now come up with two more. If you want your solution to be unique you will need to make sure that the planes you have chosen are "linearly independent."</p>
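To sanity-check such a construction, one can solve the resulting $3\times3$ system, e.g. by Cramer's rule. A small sketch (the three example equations below are my own illustrative choices, not from the answer):

```python
# Verify that three hand-picked independent planes through (3, 0, -1)
# form a system whose unique solution is exactly that point.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

A = [[1, 1, 3],    #  x + y + 3z = 0
     [1, -1, 1],   #  x - y +  z = 2
     [2, 1, 1]]    # 2x + y +  z = 5
b = [0, 2, 5]

d = det3(A)
assert d != 0  # the planes are independent, so the solution is unique

def col_replaced(A, b, j):
    """Copy of A with column j replaced by b (for Cramer's rule)."""
    return [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

solution = [det3(col_replaced(A, b, j)) / d for j in range(3)]
assert solution == [3.0, 0.0, -1.0]
```

Any three independent planes through the point work; picking a dependent triple makes the determinant vanish and the solution non-unique.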
|
134,001 | <p>What is the basic difference between the Discrete Fourier Transform and the Wavelet Transform? And why does JPEG2000 prefer DWT over DCT or DFT?</p>
| Christian Blatter | 1,303 | <p>The basic difference is the following: Assume you have a data vector ${\bf x}=(x_1,\ldots, x_N)$ of length $N:=2^n$ that models a function of a real variable $t$ on some finite interval.</p>
<p>In DFT you compute $N$ complex Fourier coefficients of this data vector according to the formula
$$\hat x_j:={1\over N}\sum_{k=0}^{N-1} x_k\ \omega^{-jk}\ , \qquad\omega:=e^{2\pi i/N}\ ,$$
where FFT (fast Fourier transform) allows you to obtain the $\hat x_j$ in $O(N\log N)$ operations, and the same number of operations will lead you back to the original ${\bf x}$. The $\hat x_j$ encode all sorts of "distributed" information about ${\bf x}$, but basically you need all of them to recover ${\bf x}$ with sufficient accuracy. </p>
<p>The basic ingredient in DWT is <em>multiresolution analysis</em>. Here the $N$ wavelet coefficients, say $w_j$, of the given ${\bf x}$ can be computed in only $O(N)$ operations. The main point, however, is that the $w_j$ encode local information about ${\bf x}$ in a way that makes it possible to discard all $w_j$ with absolute value below a given threshold and still be able to reconstruct ${\bf x}$ with acceptable accuracy. That's what makes DWT so useful in MP3 or in image compression.</p>
<p>This is accomplished in the following way (I'm describing a toy model of the setup): $w_0$ is just the average of the $x_k$; $w_1$ is the difference between the averages of the $x_k$ in the first half and the second half of the data vector, $w_2$ is the difference between the averages of the $x_k$ in the first quarter and the second quarter of the data vector, and $w_3$ is the difference between the averages of the $x_k$ in the third quarter and the fourth quarter of the data vector, and so on. Finally $w_{N-1}$ is the difference between $x_{N-1}$ and $x_N$. (I have omitted certain weight factors.)</p>
<p>The basis functions of DFT are "discretized sine waves" whereas the basis functions of DWT, the so-called <em>wavelets</em>, have very peculiar graphs. But the exact shape of these wavelets plays no rôle in the applications: It is the algebraic structure of the whole setup that is essential.</p>
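To make the toy model concrete, here is a small sketch in Python (my own addition, not part of the original answer): the orthonormal Haar transform, i.e. the averages-and-differences scheme above with the omitted $\sqrt2$ weight factors restored, computed in $O(N)$, together with the thresholding idea. For a piecewise-constant signal almost all coefficients vanish:

```python
import math

def haar(x):
    """O(N) orthonormal Haar transform; len(x) must be a power of two."""
    x = list(x)
    details = []
    while len(x) > 1:
        s = math.sqrt(2)
        avg = [(a + b) / s for a, b in zip(x[0::2], x[1::2])]
        det = [(a - b) / s for a, b in zip(x[0::2], x[1::2])]
        details = det + details        # finer scales go further right
        x = avg
    return x + details                 # [global average, coarse -> fine]

def ihaar(w):
    """Inverse transform, also O(N)."""
    x, k = w[:1], 1
    while k < len(w):
        s = math.sqrt(2)
        x = [v for a, d in zip(x, w[k:2 * k])
             for v in ((a + d) / s, (a - d) / s)]
        k *= 2
    return x

signal = [1, 1, 1, 1, 3, 3, 3, 3]      # piecewise constant, N = 8
w = haar(signal)

# only 2 of the 8 coefficients are nonzero ...
assert sum(1 for c in w if abs(c) > 1e-12) == 2
# ... and they suffice to reconstruct the signal exactly
back = ihaar(w)
assert all(abs(a - b) < 1e-9 for a, b in zip(back, signal))
```

This is exactly why thresholding works: the small coefficients carry only local fine detail, so zeroing them changes the reconstruction very little.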
|
2,391,624 | <p>This question pertains to Mosteller's classic book <em>Fifty Challenging Problems in Probability</em>. Specifically, this in regards to an algebraic operation Mosteller performs in the solution to the first question, entitled "The Sock Drawer."</p>
<p>Mosteller writes:</p>
<blockquote>
<p>Then we require the probability that both are red to be $\frac{1}{2}$, or $$\frac{r}{r+b}*\frac{r-1}{r+b-1}=\frac{1}{2}\text{.}$$
…</p>
<p>Notice that $$\frac{r}{r+b}\gt\frac{r-1}{r+b-1}\text{, for $b > 0$.}$$ Therefore we can create the inequalities $$\left(\frac{r}{r+b}\right)^2 \gt \frac 12 \gt \left(\frac{r-1}{r+b-1}\right)^2$$</p>
</blockquote>
<p>Despite much staring, and not knowing what to Google, I am stumped! In that last step, how does he do that‽</p>
<p>Many thanks,<br>
James</p>
| Donald Splutterwit | 404,247 | <p>\begin{eqnarray*}
\frac{r}{r+b} \color{blue}{\frac{r}{r+b}} > \frac{r}{r+b} \color{blue}{\frac{r-1}{r+b-1}}=\frac{1}{2} =\color{orange}{\frac{r}{r+b}} \frac{r-1}{r+b-1}>\color{orange}{\frac{r-1}{r+b-1}} \frac{r-1}{r+b-1}.
\end{eqnarray*}</p>
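A quick brute-force check of both the sock-drawer equation and this squeeze (my own addition):

```python
# Solve r(r-1)/((r+b)(r+b-1)) = 1/2 over a finite range and confirm
# Mosteller's squeeze for every solution found.
solutions = [(r, b) for r in range(2, 200) for b in range(1, 200)
             if 2 * r * (r - 1) == (r + b) * (r + b - 1)]
assert (3, 1) in solutions and (15, 6) in solutions

for r, b in solutions:
    # (r/(r+b))^2 > 1/2 > ((r-1)/(r+b-1))^2, since b > 0
    assert (r / (r + b)) ** 2 > 0.5 > ((r - 1) / (r + b - 1)) ** 2
```

The smallest drawer is $3$ red and $1$ black sock; the next solution is $(r,b)=(15,6)$.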
|
679,135 | <p><img src="https://i.stack.imgur.com/uJEMx.png" alt="enter image description here">The question is to find δ by maximum likelihood estimation.
My answer is δ=0, but I am not sure whether it is correct, or how to show its bias.</p>
| user44197 | 117,158 | <p>Assume both numbers are nonzero.</p>
<p>The condition is equivalent to
$$
(z_1-z_2)(\bar z_1 - \bar z_2)=(z_1+z_2)(\bar z_1 + \bar z_2)
$$
Multiply out to get
$$
\frac{z_1}{\bar z_1} = -\frac{z_2}{\bar z_2}
$$
or
$$
2 \angle z_1 = \pm\pi + 2 \angle z_2
$$
which gives
$$
\angle z_1 -\angle z_2 = \pm\frac{\pi}{2}
$$</p>
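A quick numeric sanity check of this equivalence, with example values of my own choosing:

```python
import cmath

# |z1 - z2| = |z1 + z2| holds exactly when the arguments differ by +-pi/2;
# check one instance in each direction.
theta = 0.7                                       # arbitrary example angle
z1 = 2 * cmath.exp(1j * theta)
z2 = 3 * cmath.exp(1j * (theta + cmath.pi / 2))   # perpendicular to z1

assert abs(abs(z1 - z2) - abs(z1 + z2)) < 1e-12   # the moduli agree
d = cmath.phase(z1) - cmath.phase(z2)
assert abs(abs(d) - cmath.pi / 2) < 1e-12         # angles differ by pi/2
```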
|
1,266,210 | <p>Hello everybody, my query is regarding the number of positive integral solutions.</p>
<blockquote>
<p>In the sport of cricket, find the number of ways in which a batsman can score $14$ runs in $6$ balls not scoring more than $4$ runs in any ball.</p>
</blockquote>
| Mann | 126,204 | <p>I just realized that this game is cricket, and this question actually has insufficient information, because in cricket you usually can't make scores of $2$ or $3$ but only $0,1,4,6$ most of the time. Well, it depends on the conditions, but I am going to assume here that all outcomes from $0$ to $4$ are possible.</p>
<p>So let our six variables be $x_i$, where $1\leq i \leq 6$.</p>
<p>Also, assuming independence, each $0\leq x_i\leq 4$.</p>
<p>Since the total score we require is</p>
<p>$\sum_{i=1}^{6}x_i=14$</p>
<p>the generating function can be formulated as</p>
<p>$\operatorname{coeff}(x^{14})\;\;\; \text{in} \;\;\; (x^0+x^1+x^2+x^3+x^4)^6$</p>
<p>$\implies \dfrac{(1-x^5)^6}{(1-x)^{6}}$</p>
<p>$\implies (1-x^5)^6(1-x)^{-6}$</p>
<p>$\implies (1-x^5)^6\left(1+\binom{6}{1}x+\binom{7}{2}x^2+\cdots\right)$</p>
<p>You can take it from here. Also, if $x_i\notin\{2,3\}$, simply discard those values in the exponents of $x$.</p>
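For completeness, a quick computation (assuming the all-outcomes-$0$-to-$4$ model above) confirming the coefficient two ways:

```python
from itertools import product
from math import comb

# Count 6-tuples with entries in 0..4 summing to 14 (coefficient of x^14).
brute = sum(1 for balls in product(range(5), repeat=6) if sum(balls) == 14)

# Same coefficient from (1 - x^5)^6 * (1 - x)^(-6) by inclusion-exclusion.
incl_excl = sum((-1) ** j * comb(6, j) * comb(14 - 5 * j + 5, 5)
                for j in range(14 // 5 + 1))

assert brute == incl_excl == 1506
```

So under this model there are $1506$ ways to score $14$ runs in $6$ balls.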
|
1,771,727 | <p>Prove that </p>
<p>$$
\frac{1}{\sqrt2} \sum_{i=1}^m \|s_i - P\|_1
\leq \sum_{i=1}^m \|s_i - P\|_2
\leq \sum_{i=1}^m \|s_i - P\|_1
$$</p>
<p>where $m$ is the number of points in the 2D plane, e.g. $s_i = (s_{x_i}, s_{y_i})$ and $p = (p_x, p_y)$.</p>
<p>Note that
$$
\begin{split}
\|s_i - p\|_1 &= |s_{x_i} - p_x| + |s_{y_i} - p_y| \\
\|s_i - p\|_2 &= \sqrt{|s_{x_i} - p_x|^2 + |s_{y_i} - p_y|^2}
\end{split}
$$</p>
<p>$P$ is a stationary point, and $s_i$ is one of the $m$ points around $P$.</p>
| Bérénice | 317,086 | <p>You can use the exponential form.</p>
<p>$z=2(1+ \sqrt3 i)^{1/2}=2\sqrt2(\frac{1}{2}+ \frac{\sqrt3}{2} i)^{1/2}=2\sqrt2(e^{\frac{i\pi}{3}})^{\frac{1}{2}}=2\sqrt2(e^{\frac{i\pi}{6}})=2\sqrt2(\frac{\sqrt3}{2}+i\frac{1}{2})=\sqrt2\sqrt3+i\sqrt2=\sqrt6+i\sqrt2$</p>
|
1,771,727 | <p>Prove that </p>
<p>$$
\frac{1}{\sqrt2} \sum_{i=1}^m \|s_i - P\|_1
\leq \sum_{i=1}^m \|s_i - P\|_2
\leq \sum_{i=1}^m \|s_i - P\|_1
$$</p>
<p>where $m$ is the number of points in the 2D plane, e.g. $s_i = (s_{x_i}, s_{y_i})$ and $p = (p_x, p_y)$.</p>
<p>Note that
$$
\begin{split}
\|s_i - p\|_1 &= |s_{x_i} - p_x| + |s_{y_i} - p_y| \\
\|s_i - p\|_2 &= \sqrt{|s_{x_i} - p_x|^2 + |s_{y_i} - p_y|^2}
\end{split}
$$</p>
<p>$P$ is a stationary point, and $s_i$ is one of the $m$ points around $P$.</p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>As noted in the comment you have $(4+4 \sqrt {3} i)^{1/2}=2(1+\sqrt {3} i)^{1/2}$. Now use the <a href="https://en.wikipedia.org/wiki/Complex_number#Polar_form" rel="nofollow">polar form</a> for the complex number:
$
u=(1+\sqrt {3} i)= re^{i\theta}
$
where:</p>
<p>$r=|u|=\sqrt{1+3}=2$ and $\theta=\arctan( \sqrt{3})= \frac{\pi}{3}$</p>
<p>The (<a href="https://en.wikipedia.org/wiki/Square_root#Principal_square_root_of_a_complex_number" rel="nofollow">principal</a>) square root of $u$ is then $\sqrt{u}=\sqrt{re^{i\theta}}=\sqrt{r}e^{i\theta/2}$</p>
<p>But note that in $\mathbb{C}$ we also have another square root, namely $-\sqrt{u}$.</p>
|
2,877,578 | <p>Yesterday, I asked the question: <a href="https://math.stackexchange.com/questions/2876740/prove-that-if-a-b-are-closed-then-exists-u-v-open-sets-such-that-u-cap?noredirect=1#comment5938458_2876740">Prove that if $A,B$ are closed then, $ \exists\;U,V$ open sets such that $U\cap V= \emptyset$</a>. </p>
<p>Here is the correct question: prove that if $A,B$ are closed sets in a metric space such that $A\cap B= \emptyset$, there exists $U,V$ open sets such that $A\subset U$, $B\subset V$, and $U\cap V= \emptyset$. </p>
<p>I am thinking of going by contradiction, that is: $\forall\; U,V$ open sets such that $A\subset U$, $B\subset V$, and $U\cap V\neq \emptyset$. </p>
<p>Let $U,V$ be open. Then $\exists\;r_1,r_2$ such that $B(x,r_1)\subset U$ and $B(x,r_2)\subset V$. I got stuck here!</p>
<p>I'm thinking of using the properties of a $T_4$-space but I can't find a proof! Any solution or reference related to metric spaces?</p>
| qualcuno | 362,866 | <p>Let </p>
<p>$$
f\colon (X,d) \longrightarrow [0,1] \\
x \mapsto \frac{d(x,A)}{d(x,A) + d(x,B)}
$$</p>
<p>Note that $f$ is continuous, because $d(\cdot,A)$ is, and that it is in fact well defined, because $d(x,A) + d(x,B) = 0$ if and only if $d(x,A) = 0$ and $d(x,B) = 0$ which cannot happen because it would imply $x \in \overline{A} \cap \overline{B} = A \cap B = \emptyset$. Now, </p>
<p>$$
f(x) = 0 \iff d(x,A) = 0\iff x \in \overline{A} = A
$$</p>
<p>and</p>
<p>$$
f(x) = 1 \iff d(x,A) = d(x,A) + d(x,B) \iff d(x,B) = 0 \iff x \in B
$$</p>
<p>Thus, $A = f^{-1}(\{0\})$ and $B = f^{-1}(\{1\})$. Now you can for example take $U = f^{-1}[0,\frac{1}{4})$ and $V = f^{-1}(\frac{3}{4},1]$ as the desired open sets: these are open because the are preimages of open sets of $[0,1]$ and $f$ is continuous, and they are disjoint and contain each closed set by construction.</p>
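For intuition, here is a numeric sketch of this function on the real line, with example sets $A=[0,1]$ and $B=[2,3]$ of my own choosing:

```python
# f(x) = d(x,A) / (d(x,A) + d(x,B)) for A = [0,1], B = [2,3] on the line.
def dist_to_interval(x, lo, hi):
    """Distance from x to the closed interval [lo, hi]."""
    return max(lo - x, 0.0, x - hi)

def f(x):
    dA = dist_to_interval(x, 0.0, 1.0)
    dB = dist_to_interval(x, 2.0, 3.0)
    return dA / (dA + dB)

assert f(0.5) == 0.0          # f vanishes exactly on A
assert f(2.5) == 1.0          # f is 1 exactly on B
assert f(1.5) == 0.5          # midway between the sets

# A sits inside U = {f < 1/4} and B inside V = {f > 3/4}:
assert all(f(x) < 0.25 for x in (0.0, 0.5, 1.0, 1.1))
assert all(f(x) > 0.75 for x in (1.9, 2.0, 2.7, 3.0))
```

Since $U$ and $V$ are preimages of disjoint intervals, they are automatically disjoint.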
|
1,679,615 | <p>From what I have read, a relation is transitive if whenever $xRy$ and $yRz$ are both true, $xRz$ has to be true as well. </p>
<p>I'm doing some practice problems and I'm a little confused with identifying a transitive relation. </p>
<p>My first example is an "equivalence relation":
$S=\{1,2,3\}$ and $R = \{(1,1),(1,3),(2,2),(2,3),(3,1),(3,2),(3,3)\}$.
My book's solutions say that this relation is
reflexive and symmetric.</p>
<p>My second example is a "partial order":
$S=\{1,2,3\}$ and $R =\{(1,1),(2,3),(1,3)\}$.
My book's solutions say it is
antisymmetric and transitive.</p>
<p>I got confused with why is the partial order(second example) Transitive. So what I did is that I applied $1R1$ and $1R3$ so $1R3$($xRy$ and $yRz$ so $xRz$). </p>
<p>I tried to applied this to my first example (equivalence relation). What I did is $1R1$ and $1R3$ so $1R3$ ($xRy$ and $yRz$ so $xRz$).</p>
<p>Can someone explain what I'm missing or doing wrong? What can I do to identify a transitive relation? As you can see on both practice examples both have the same set of relations $1R1$ and $1R3$ so $1R3$($xRy$ and $yRz$ so $xRz$) but one is transitive and the other is not.</p>
| Eric Wofsey | 86,856 | <p>To be transitive, $xRz$ needs to hold <em>whenever</em> you have $x$, $y$, and $z$ such that $xRy$ and $yRz$. You're only testing the case $x=y=1$ and $z=3$, but there might be other cases. For instance, in your first example, you can let $x=1$, $y=3$, and $z=2$, and you find that $1R3$ and $3R2$, but $1R2$ is not true. So the relation is not transitive.</p>
<p>(By the way, you should not call that first relation an "equivalence relation", since an equivalence relation is required to be transitive, in addition to being reflexive and symmetric.)</p>
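The "check every composable pair" reading of the definition is easy to mechanize; a small sketch applied to both examples from the question:

```python
def is_transitive(R):
    """xRy and yRz must imply xRz for EVERY composable pair in R."""
    return all((x, z) in R
               for (x, y) in R for (y2, z) in R if y == y2)

R1 = {(1, 1), (1, 3), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}
R2 = {(1, 1), (2, 3), (1, 3)}

# The witness from the answer: 1R3 and 3R2, yet (1,2) is missing.
assert (1, 3) in R1 and (3, 2) in R1 and (1, 2) not in R1
assert not is_transitive(R1)   # first example fails
assert is_transitive(R2)       # second example passes
```

Testing a single triple, as in the question, is not enough; every composable pair must be checked.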
|
50,680 | <p>Similar question to <a href="https://math.stackexchange.com/questions/50675/logical-squabbles">this</a> question (This question was actually the motivation for the other question, which - if I haven't got it wrong - generalizes this one): I have to prove a proposition of the following form: $$\forall T \subseteq V: \ P(T) \ \& \ Q(T)$$ where $T,V$ are sets and $P,Q$ are some properties the set $T$ has to satisfy (which are irrelevant for this question). Now, if no $T$ satisfies $P(T)$, the proposition is obviously false. But if I rewrite the set of which $T$ is a subset in the following manner $$ \forall T \in \left\{ F \in \mathcal{P}(V) \ | \ P(F) \right\}: \ Q(T)$$ it should be equivalent to the above proposition, because all I did was constrain the possibilities for what a set $T$ can be such that my first property $P$ is already included (so if the above proposition were true, this one, with the $T$'s that already satisfy $P$, would also have to be true, and if the above is false, this one ought to be false as well, since all I did was move the property $P$ around). But in this form the proposition is vacuously true. What have I done wrong?</p>
| Brian M. Scott | 12,042 | <p>The proposition $(\forall T \in \{F \in \mathcal P(V):P(F)\})Q(T)$ is <em>not</em> vacuously true: it says that if $P(T)$ is true, then so is $Q(T)$. It's equivalent to $(\forall T \subseteq V)[P(T) \to Q(T)]$, not to $(\forall T \subseteq V)[P(T) \land Q(T)]$</p>
|
50,680 | <p>Similar question to <a href="https://math.stackexchange.com/questions/50675/logical-squabbles">this</a> question (This question was actually the motivation for the other question, which - if I haven't got it wrong - generalizes this one): I have to prove a proposition of the following form: $$\forall T \subseteq V: \ P(T) \ \& \ Q(T)$$ where $T,V$ are sets and $P,Q$ are some properties the set $T$ has to satisfy (which are irrelevant for this question). Now, if no $T$ satisfies $P(T)$, the proposition is obviously false. But if I rewrite the set of which $T$ is a subset in the following manner $$ \forall T \in \left\{ F \in \mathcal{P}(V) \ | \ P(F) \right\}: \ Q(T)$$ it should be equivalent to the above proposition, because all I did was constrain the possibilities for what a set $T$ can be such that my first property $P$ is already included (so if the above proposition were true, this one, with the $T$'s that already satisfy $P$, would also have to be true, and if the above is false, this one ought to be false as well, since all I did was move the property $P$ around). But in this form the proposition is vacuously true. What have I done wrong?</p>
| Arturo Magidin | 742 | <p>The reason the kind of "rewriting" you are doing is not in general valid is that the universal quantifier does not specify a domain; when we write something like "$\forall T\in X (U(T))$", we are actually abusing notation, because the $\forall$ only quantifies over <em>all</em> objects, not over <em>some</em> objects. The above formula is actually shorthand for "$\forall T( T\in X\rightarrow U(T))$" (that is, when we say "for all $T$ in $X$, $U$ holds", we are <em>really</em> saying "for all $T$, <strong>if</strong> $T$ is in $X$ <strong>then</strong> $U$ holds"). </p>
<p>So you have gone wrong in assuming the two statements are equivalent. The second statement is actually of the form $\forall T\bigl((T\in \mathcal{P}(V)\land P(T))\rightarrow Q(T)\bigr)$ (that is, if $T$ is a subset of $V$ <em>and</em> satisfies $P$, then it satisfies $Q$). The first statement, on the other hand, is $\forall T\bigl(T\in \mathcal{P}(V)\rightarrow (P(T)\land Q(T))\bigr)$.</p>
<p>The two statements are not equivalent: $(R\land T)\rightarrow S$ is equivalent to $R\rightarrow(T\rightarrow S)$; but $T\rightarrow S$ is not equivalent to $T\land S$, it's equivalent to $\neg T\lor S$.</p>
<p>So even though you think the two statements "say" the same thing, they really aren't because you are not writing them in full, you are writing abbreviations and those abbreviations are leading you astray.</p>
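These propositional facts are small enough to confirm by brute force over all truth assignments (a quick sketch, not part of the original argument):

```python
from itertools import product

def imp(p, q):
    """Material implication p -> q."""
    return (not p) or q

triples = list(product([False, True], repeat=3))

# (R and T) -> S  is equivalent to  R -> (T -> S) ...
assert all(imp(R and T, S) == imp(R, imp(T, S)) for R, T, S in triples)

# ... but  T -> S  is NOT equivalent to  T and S  (witness: T = S = False)
assert imp(False, False) != (False and False)
assert any(imp(T, S) != (T and S) for _, T, S in triples)
```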
|
1,803,205 | <p>I am trying to find $\int_0^{\infty} \frac{dx}{1 + x^n}$ using contour integration. I did the computation by taking the contour $[0,R] \cup \gamma_R \cup [R e^{2i\pi/n}, 0]$, with $\gamma_R$ the arc joining $R$ to $Re^{2i\pi/n}$, and found that (for $n \ge 2$):</p>
<p>$$\int_0^{\infty} \frac{dx}{1 + x^n} = \frac{2\pi i}{(1 - e^{2i\pi/n})\prod_{k=0, k \neq 1}^{n-1} (e^{i\pi/n} - e^{i(2k-1)\pi/n})}$$</p>
<p>Apparently, this should be equal to:</p>
<p>$$\frac{\pi/n}{\sin(\pi/n)}$$</p>
<p>How to get that?</p>
| Marco Cantarini | 171,547 | <p>If I have understood correctly, you want to see why the integral is also equal to $$\frac{\pi/n}{\sin\left(\pi/n\right)}
$$ (if I am wrong tell me and I will delete the answer). A way to see it is to observe that, taking $1+x^{n}=u^{-1}
$ $$\int_{0}^{\infty}\frac{1}{1+x^{n}}dx=\frac{1}{n}\int_{0}^{1}u^{-1/n}\left(1-u\right)^{1/n-1}\,du=\frac{1}{n}\frac{\Gamma\left(1-\frac{1}{n}\right)\Gamma\left(\frac{1}{n}\right)}{\Gamma\left(1\right)}=\frac{\pi/n}{\sin\left(\pi/n\right)}
$$ for the <a href="https://en.wikipedia.org/wiki/Reflection_formula" rel="nofollow">reflection formula of the Gamma function</a>. It is possible to generalize the result $$\int_{0}^{\infty}\frac{x^{b}}{1+x^{a}}dx=\frac{\pi/a}{\sin\left(\pi\left(b+1\right)/a\right)},\qquad a>b+1,\; b\geq0.$$</p>
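A numeric sanity check of the closed form (my own addition; midpoint rule after the substitution $x=t/(1-t)$, which maps $[0,1)$ onto $[0,\infty)$):

```python
import math

def integral(n, steps=200_000):
    # x = t/(1-t) maps [0,1) onto [0,inf); dx = dt/(1-t)^2.
    # The midpoint rule never evaluates at the endpoint t = 1.
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        x = t / (1.0 - t)
        total += 1.0 / (1.0 + x ** n) / (1.0 - t) ** 2
    return total * h

for n in (2, 3, 4, 6):
    exact = (math.pi / n) / math.sin(math.pi / n)
    assert abs(integral(n) - exact) < 1e-4
```

For $n=2$ this reproduces the familiar $\int_0^\infty dx/(1+x^2)=\pi/2$.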
|
3,366,158 | <blockquote>
<p>Find a linear transformation such that <span class="math-container">$M^2 (1- M) =0$</span> but <span class="math-container">$M$</span> is not idempotent.</p>
</blockquote>
<p>My attempt: I take the vector space generated by the basis $\{e_1, e_2\}$ and define $T(e_1)=e_2$ and $T(e_2)=e_1$.</p>
| TomGrubb | 223,701 | <p>Hint: you can assume with no loss of generality that <span class="math-container">$M$</span> is in Jordan Normal form.</p>
|
2,161,911 | <p>Find the limit :</p>
<p>$$\lim_{ n\to \infty }\sqrt[n]{\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}}=\,\,?$$</p>
<p>My try :</p>
<p>$$\lim_{ n\to \infty }\sqrt[n]{a} =1\,\, \text {for} \,\,a>0$$</p>
<p>and;</p>
<p>$$\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}>0$$</p>
<p>so :</p>
<p>$$\lim_{ n\to \infty }\sqrt[n]{\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}}=1$$</p>
<p>Is it right?</p>
| Claude Leibovici | 82,404 | <p><em>Much less elegant than RRL's answer.</em></p>
<p>Consider $$P_n=\left(\prod _{i=1}^n \sec \left(\frac{1}{i}\right)\right){}^{\frac{1}{n}}$$ $$\log(P_n)=\frac 1n \sum _{i=1}^n \log\left(\sec \left(\frac{1}{i}\right)\right)$$ Now, by Taylor for large values of $i$ $$\sec \left(\frac{1}{i}\right)=1+\frac{1}{2 i^2}+\frac{5}{24 i^4}+O\left(\frac{1}{i^6}\right)$$ $$\log\left(\sec \left(\frac{1}{i}\right)\right)=\frac{1}{2 i^2}+\frac{1}{12 i^4}+O\left(\frac{1}{i^6}\right)$$ $$\sum _{i=1}^n \log\left(\sec \left(\frac{1}{i}\right)\right)=\frac{H_n^{(2)}}{2}+\frac{H_n^{(4)}}{12}+\cdots$$ where the generalized harmonic numbers $H_n^{(m)}$ appear.</p>
<p>Using asymptotics $$\log(P_n)=\frac 1 n \left(\frac{90 \pi ^2+\pi ^4}{1080}-\frac{1}{2 n}+O\left(\frac{1}{n^2}\right)\right)$$ Taylor again $$P_n=e^{\log(P_n)}=1+\frac{90 \pi ^2+\pi ^4}{1080 n}+O\left(\frac{1}{n^2}\right)$$ which seems to be a quite reasonable approximation
$$\left(
\begin{array}{cccc}
n & P_n & \text{approximation} & \text{difference} \\
10 & 1.09393 & 1.09127 & 0.00266 \\
20 & 1.04713 & 1.04563 & 0.00149 \\
30 & 1.03145 & 1.03042 & 0.00103 \\
40 & 1.02360 & 1.02282 & 0.00078 \\
50 & 1.01889 & 1.01825 & 0.00063\\
60 & 1.01574 & 1.01521 & 0.00053 \\
70 & 1.01349 & 1.01304 & 0.00046 \\
80 & 1.01181 & 1.01141 & 0.00040 \\
90 & 1.01050 & 1.01014 & 0.00036 \\
100 & 1.00945 & 1.00913 & 0.00032
\end{array}
\right)$$</p>
|
124,498 | <p>When I produce a simple plot like so:</p>
<pre><code>Plot[Sin[x], {x, 0, 2 Pi}, GridLines -> Automatic, GridLinesStyle -> Directive[Red]]
</code></pre>
<p>It looks like this:</p>
<p><a href="https://i.stack.imgur.com/KcyZo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KcyZo.png" alt="enter image description here"></a></p>
<p>Notice how the grid lines cross over the vertical axis on the left, going all the way through the first character of the tick labels. I see this regardless of StyleSheet used. Looks like a bug to me, but perhaps there's a way to fix this? Of course, ideally I'd also have the tick labels on the horizontal axis printed on top of the grid lines rather than the other way around, but that may be to reasonable a choice to ask for...</p>
<p><strong>Edit:</strong></p>
<p>Never mind the question on the too long gridlines; I found the answer (use <code>PlotRangePadding -> 0</code>). My question on having the gridlines in the background remains, however.</p>
| halirutan | 187 | <p>The grid-lines are directly connected to the <code>PlotRange</code> that you use. Although you only plot from 0 to <code>2*Pi</code>, <em>Mathematica</em> adds a little space around your plot. This little space, called <code>PlotRangePadding</code> is the source of this issue:</p>
<pre><code>Plot[Sin[x], {x, 0, 2 Pi},
GridLines -> Automatic,
GridLinesStyle -> Directive[Red],
PlotRangePadding -> 0
]
</code></pre>
<p><img src="https://i.stack.imgur.com/FUA52.png" alt="Mathematica graphics"></p>
<p>As for your other question</p>
<blockquote>
<p>Of course, ideally I'd also have the tick labels on the horizontal axis printed on top of the grid lines rather than the other way around</p>
</blockquote>
<p>Well, at least for me it seems that grid is indeed in the background which can be shown in a magnified screenshot. Of course, red grid-lines in combination with gray axes aren't the best design choice:</p>
<p><a href="https://i.stack.imgur.com/kEEid.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kEEid.png" alt="enter image description here"></a></p>
<p>As a final note, if you could live for instance with gray grid-lines, the complete issue doesn't look so bad after all. Even if you extend your plot further to the left:</p>
<p><img src="https://i.stack.imgur.com/X9Oom.png" alt="Mathematica graphics"></p>
|
734,248 | <p>Example of two open balls such that the one with the smaller radius contains the one with the larger radius.</p>
<p>I cannot find a metric space in which this is true. Looking for hints in the right direction. </p>
| user2345215 | 131,872 | <p>Have you considered the trivial $1$ point metric space? Every ball is the same there.</p>
<p>In general, pick any space with an isolated point. Then you can have equal balls around it with different radii.</p>
<p>Also note that every metric space can be bounded by cutting off the distance, i.e. $d'(x,y)=\min(d(x,y),1)$. Then $B(x,r)=B(x,r')$ for any $r,r'\ge1$.</p>
<p>If you are looking for a ball of larger radius strictly contained in a ball of smaller radius, consider the metric space as the interval $[-1,1]$ which is an open ball of radius $\frac43$ around $0$ which strictly contains an open ball of radius $\frac53$ around $1$.</p>
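A quick discretized check of this example, sampling the space $X=[-1,1]$ (my own addition):

```python
# X = [-1, 1] with the usual distance; B(0, 4/3) is all of X,
# while B(1, 5/3) = (-2/3, 1] is strictly smaller despite the larger radius.
xs = [i / 1000.0 for i in range(-1000, 1001)]               # sample of X

ball_small_radius = {x for x in xs if abs(x - 0) < 4 / 3}   # radius 4/3
ball_large_radius = {x for x in xs if abs(x - 1) < 5 / 3}   # radius 5/3

assert ball_large_radius < ball_small_radius   # strictly contained
assert -0.9 in ball_small_radius and -0.9 not in ball_large_radius
```

The point is that balls are taken inside the metric space $X$, not inside $\mathbb{R}$.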
|
734,248 | <p>Example of two open balls such that the one with the smaller radius contains the one with the larger radius.</p>
<p>I cannot find a metric space in which this is true. Looking for hints in the right direction. </p>
| Henno Brandsma | 4,280 | <p>Look at the plane ($\mathbb{R}^2$) in the so-called post-office metric. Denoting the usual norm on the plane by $\|x\|$, we define this metric for two distinct points $x$ and $y$ in the plane as $d(x,y) = \|x\| + \|y\|$, and $d(x,x) = 0$, of course. One easily checks that this is a metric.</p>
<p>The intuition is that to go from point $x$ to $y$ they need to travel via the "post-office", which is the origin $(0,0)$. One sees that every point $x \neq (0,0)$ is an isolated point, and that open balls around $(0,0)$ are the same as the usual ones.</p>
<p>Now look at $B((0,0), \frac{3}{2})$ and $B((1,0), 2)$. The latter consists of the usual open ball around $(0,0)$ of radius $1$ plus $(1,0)$ itself, which is a proper subset of the former, which has a smaller radius.</p>
|
2,105,963 | <p>Suppose ${{A_1}}$=[1,3] and ${{A_2}}$=[2,4]; then ${{A_1} \cup {A_2}}$=[1,4], and $\sup \left( {{A_1} \cup {A_2}} \right)$ is clearly 4. So $\sup \left( {{A_1} \cup {A_2}} \right) = \max \left( {\sup {A_1},\sup {A_2}} \right)$ is true.</p>
<p>Confusion with definition: <br/>
<em>s</em> is the least upper bound for a set $A \subseteq R$ if two criteria are met <br/>
(1) <em>s</em> is an upper bound for <em>A</em><br/>
(2) if <em>b</em> is any upper bound for <em>A</em>, then $s \le b$ <br/>
In the proof, if I take ${s_1}$ to be ${\sup {A_1}}$ and ${s_2}$ to be ${\sup {A_2}}$, then if I apply the definition, the least of ${s_1}$ and ${s_2}$ is $\sup \left( {{A_1} \cup {A_2}} \right)$, which is certainly not true. What exactly am I missing here?</p>
<p>Then, once that is proved, how can I extend it to $\sup \left( { \cup _{k = 1}^n{A_k}} \right)$? Maybe if I get clear on the base case it will not be required.<br/>
Edit:
${{A_1}}$ and ${{A_2}}$ are nonempty sets which are bounded above.</p>
| Ansel B | 397,059 | <p>It will be the case that $s_1\leq s_2$ or $s_2\leq s_1$, but we can take wlog $s_1\leq s_2$, as the argument will be mirrored in the dual case. So then $s_2$ is an upper bound (ub) of both $A_1$ and $A_2$, and of the union as well. So if $u$ is a ub of $A_1\cup A_2$, it is a ub of $A_2$ as well, so it must be that $s_2\leq u$, and so $s_2$ is the least upper bound of the union; that is, $\sup(A_1\cup A_2)=\max\{\sup(A_1),\sup(A_2)\}$. And yes, you are correct that it can be extended to any $n\in\mathbb{N}$ by a trivial induction argument. Just to note, the definition of the supremum tells you that if you want to show that $x$ is the supremum of a set $Z$, first show that $x$ is an ub of $Z$, and then that if $u$ is another ub of $Z$, then $x\leq u$.</p>
|
373,357 | <p>I've thought of using split-complex and complex numbers together for building a 3-dimensional space (related to my <a href="https://math.stackexchange.com/questions/372747/what-are-the-uses-of-split-complex-numbers?noredirect=1">previous question</a>). I then found out that using both together, we can have trouble with the product $ij$. So by adding another dimension, I've defined $$k=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$$
with the property $k^2=1$. So numbers of the form $a+bi+cj+dk$, where $(a,b,c,d) \in \Bbb R^4$, $i$ is the imaginary unit, $j$ is the elementary unit of the split-complex numbers, and $k$ is the number defined above, could be represented in a 4-dimensional space. I know that these numbers look like the quaternions. They are not! So far, I came up with the multiplication table below:
$$\begin{array}{|l |l l l|}\hline
& i&j&k \\ \hline
i&-1&k&j \\
j& -k&1&i \\
k& -j&-i&1 \\ \hline
\end{array}$$</p>
<p>We can note that, as with the quaternions, multiplication of these numbers is no longer commutative. When I showed this work to my math teacher he said basically this:</p>
<ol>
<li>It's not coherent to use numbers with different properties as basic elements, since $i^2=-1$ whereas $j^2=k^2=1$</li>
<li>$2\times 2$ matrices don't represent anything in a 4-dimensional space</li>
</ol>
<p>Can somebody explain these 2 things to me? What's incoherent here?</p>
| Anixx | 2,513 | <p>Here is the Mathematica code for your proposed system, split-quaternions, using <span class="math-container">$2\times2$</span> real matrices <span class="math-container">$i=\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0 \\
\end{array}
\right),j=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right), k=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right) $</span>:</p>
<pre><code>Unprotect[Dot];
Dot[x_?NumberQ, y_] := x y;
Protect[Dot];
Matrix /: Matrix[x_?MatrixQ] :=
First[First[x]] /; x == First[First[x]] IdentityMatrix[Length[x]];
Matrix /: NonCommutativeMultiply[Matrix[x_?MatrixQ], y_] :=
Dot[Matrix[x], y];
Matrix /: NonCommutativeMultiply[y_, Matrix[x_?MatrixQ]] :=
Dot[y, Matrix[x]];
Matrix /: Dot[Matrix[x_], Matrix[y_]] := Matrix[x . y];
Matrix /: Matrix[x_] + Matrix[y_] := Matrix[x + y];
Matrix /: x_?NumericQ + Matrix[y_] :=
Matrix[x IdentityMatrix[Length[y]] + y];
Matrix /: x_?NumericQ Matrix[y_] := Matrix[x y];
Matrix /: Matrix[x_]*Matrix[y_] := Matrix[x . y] /; x . y == y . x;
Matrix /: Re[Matrix[x_?MatrixQ]] := Tr[x]/Length[x];
Matrix /: Conjugate[Matrix[x_?MatrixQ]] := Matrix[-x] + 2 Re[Matrix[x]]
Matrix /: Power[Matrix[x_ ?MatrixQ], y_] :=
Matrix[MatrixPower[x, y]];
$Post = FullSimplify[# /. i -> Matrix[( {
{0, 1},
{-1, 0}
} )] /. j -> Matrix[( {
{0, 1},
{1, 0}
} )] /. k -> Matrix[( {
{1, 0},
{0, -1}
} ) ] /.
f_[args1___, Matrix[mat_], args2___] :>
Matrix[MatrixFunction[f[args1, #, args2] &, mat]] /.
Matrix[{{a_, c_}, {d_, b_}}] :> (a + b)/
2 + (c - d)/2 i + (c + d)/2 j + (a - b)/
2 k ] &;
</code></pre>
<p>Test:</p>
<pre><code>In=(i + k)^2
Out=0
In=Log[2 i + 3 j + 4]
Out=1/10 (2 Sqrt[5] (2 i + 3 j) ArcCoth[4/Sqrt[5]] + Log[161051])
In=(2 i + 3 j + 4) ** (5 - k)
Out=20 + 13 i + 17 j - 4 k
</code></pre>
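<p>As a cross-check outside Mathematica, the same three $2\times 2$ matrices can be multiplied in plain Python. Note that with these matrices $jk=-i$ and $ki=j$ (the standard split-quaternion signs), which differ from two entries of the table in the question — indeed the table as written cannot come from any associative product, since it would force $i(jk)=i\cdot i=-1$ while $(ij)k=k\cdot k=1$:</p>

```python
# 2x2 real matrices as nested tuples; mul is ordinary matrix multiplication
def mul(X, Y):
    return tuple(
        tuple(sum(X[r][t] * Y[t][c] for t in range(2)) for c in range(2))
        for r in range(2)
    )

def neg(X):
    return tuple(tuple(-v for v in row) for row in X)

I2 = ((1, 0), (0, 1))
i  = ((0, 1), (-1, 0))   # i^2 = -1
j  = ((0, 1), (1, 0))    # j^2 = +1
k  = ((1, 0), (0, -1))   # k^2 = +1

assert mul(i, i) == neg(I2) and mul(j, j) == I2 and mul(k, k) == I2
assert mul(i, j) == k and mul(j, i) == neg(k)    # ij = k,  ji = -k
assert mul(k, i) == j and mul(i, k) == neg(j)    # ki = j,  ik = -j
assert mul(j, k) == neg(i) and mul(k, j) == i    # jk = -i, kj = i
```

<p>In particular $(i+k)^2=i^2+ik+ki+k^2=-1-j+j+1=0$, matching the <code>(i + k)^2</code> test above.</p>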
|
272,173 | <p>I'm looking for several references on the spectral analysis of the Laplacian operator. It is such a well-known topic, but I'm a bit struggling to locate modern systematic expositions in the literature. </p>
<p>I'd appreciate multiple suggestions that explore the topic using different approaches too.</p>
<p>I'm particularly interested in the variational characterization of the eigenvalues and eigenfunctions.</p>
| Bombyx mori | 18,850 | <p>I took a reading course based on Sogge's book "<a href="http://press.princeton.edu/titles/10290.html" rel="noreferrer">Hangzhou Lectures on Eigenfunctions of the Laplacian</a>" a long time ago. This may serve as a standard reference because most of the results mentioned in this book are well-known among researchers. </p>
<p>The subject of spectral geometry of elliptic operators is very deep, and you should be able to read more in-depth articles afterwards suitable for your interest. It is really easy to ask open questions yourself and finding them difficult to answer, and then you may learn something by reading through literature. This may sound horribly generic, but the subject is connected to many other fields of mathematics (abstract harmonic analysis, microlocal analysis, geometric analysis, Hodge theory, to name a few). So I think listing an exhaustive list may be impossible. </p>
|
820,878 | <p>I am not quite familiar with the concept of correlation.
The Pearson's correlation coefficient is defined as:</p>
<p><span class="math-container">$$\rho_{X,Y}=\mathrm{corr}(X,Y)={\mathrm{cov}(X,Y) \over \sigma_X \sigma_Y} ={E[(X-\mu_X)(Y-\mu_Y)] \over \sigma_X\sigma_Y}$$</span></p>
<p>which makes use of the mean and standard deviation. But is it restricted to normally distributed data, given that a Gaussian distribution is characterized by its mean and variance?</p>
<p>I currently have some data which apparently do not follow a normal distribution. When assessing the correlation between them, is Pearson's correlation appropriate here?</p>
| Anatoly | 90,997 | <p>Pearson's correlation measures the strength of a <em>linear</em> relationship, and the usual significance tests for it assume normality. If you have non-normal data, you can use Spearman's rank correlation, which does not require the normality assumption.</p>
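<p>For a concrete illustration: Spearman's coefficient is just Pearson's coefficient computed on the ranks of the data, so it only measures monotone association. A self-contained Python sketch (the helper functions below are my own, not from a statistics library):</p>

```python
def ranks(v):
    # average ranks (1-based), with ties sharing the mean rank
    order = sorted(range(len(v)), key=lambda idx: v[idx])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for t in range(i, j + 1):
            r[order[t]] = avg
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]          # monotone but nonlinear
assert abs(spearman(x, y) - 1.0) < 1e-12
assert pearson(x, y) < 1.0
```

<p>Here Spearman is exactly $1$ because the relationship is perfectly monotone, while Pearson falls below $1$ because it is not linear.</p>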
|
9,990 | <p>Consider the following problem: </p>
<ul>
<li>Maria always buys ice-cream when she goes to the beach. She bought ice-cream today. So, she must have gone to the beach. </li>
</ul>
<p>Obviously this argument is wrong: Maria could have gone to another place and bought an ice-cream. You don't need any math tool to arrive at this conclusion; all you need is reasoning. </p>
<p>However, several adults (18–50 years old) with difficulty in math also have a really hard time solving/understanding this kind of problem. For them, learning math is the same as memorizing rules and formulas. Anything different from that (e.g., reasoning) is extremely painful.</p>
<p>So, is it possible to make such students correctly answer <a href="https://en.wikipedia.org/wiki/Graduate_Management_Admission_Test">GMAT</a> style questions?</p>
| Syntax Junkie | 5,465 | <p>There is a paradox here. Implication IS tricky. Part of the reason is it's REALLY HARD to avoid implicit assumptions in human scenarios. The only absolute way to avoid assumptions is to translate the English into meaningless symbols (p, q) and rigorously apply logic rules. The rules themselves are tricky. You pretty much need to prove them once to know they are correct, then apply them by rote from then on. Your example is on the cusp of what most people can handle in their head. For anything much more complex, most of us will need to manipulate symbols using logical rules to "reason correctly." </p>
<p>In your example, some of your students will intuitively see the correct premise and conclusion. Others will need some help. All of us will need some help if the scenario gets complicated enough. (Or if it is sneaky enough to trick us into making unwarranted assumptions. It happens.) I'll probably lose points for politicizing this, but consider the recent debate on gay marriage. There were some not irrational arguments along the lines of "If you want to have children, then you should be married." But there was an implicit assumption of bi-implication. Not once did I hear anyone--on either side of the debate--point out that this is not the same thing as saying "If you want to get married, then you should have children." So this sort of logical error is very easy to make, even by well-trained people. </p>
<p>I agree with Daniel and his blogs. The only solution is formal training in logic. If only to make people aware of how EASY it is to make mistakes.</p>
<p>If you're looking for the most bang for your teaching buck, what I have personally found the most useful is this:</p>
<ol>
<li><p>What implication is: An if-then statement. Show how to identify the premise and the conclusion.</p></li>
<li><p>Show the differences between the INVERSE, CONVERSE, and CONTRAPOSITIVE. My first exposure to these three conditionals was a real eye-opener. It did more than anything else in my life to avoid the kind of error shown in your example.</p></li>
</ol>
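<p>The relationships among these three conditionals can be checked mechanically by brute force over truth values — a small Python sketch, encoding $p\to q$ as $\lnot p\lor q$:</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

cases = list(product([False, True], repeat=2))

# The contrapositive (~q -> ~p) agrees with p -> q in every case ...
assert all(implies(p, q) == implies(not q, not p) for p, q in cases)
# ... but the converse (q -> p) and the inverse (~p -> ~q) do not.
assert any(implies(p, q) != implies(q, p) for p, q in cases)
assert any(implies(p, q) != implies(not p, not q) for p, q in cases)
```

<p>The failing case for the converse is exactly Maria's: $p$ (went to the beach) false and $q$ (bought ice-cream) true makes $p\to q$ true but $q\to p$ false.</p>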
|
9,990 | <p>Consider the following problem: </p>
<ul>
<li>Maria always buys ice-cream when she goes to the beach. She bought ice-cream today. So, she must have gone to the beach. </li>
</ul>
<p>Obviously this argument is wrong: Maria could have gone to another place and bought an ice-cream. You don't need any math tool to arrive at this conclusion; all you need is reasoning. </p>
<p>However, several adults (18–50 years old) with difficulty in math also have a really hard time solving/understanding this kind of problem. For them, learning math is the same as memorizing rules and formulas. Anything different from that (e.g., reasoning) is extremely painful.</p>
<p>So, is it possible to make such students correctly answer <a href="https://en.wikipedia.org/wiki/Graduate_Management_Admission_Test">GMAT</a> style questions?</p>
| Dan Fox | 672 | <p>This is a subtle issue. It goes to the heart of the difference between math and physics.</p>
<p>That A implies B does not entail that B implies A. One encounters frequently the errant reasoning that it does even among engineering students in the university (yesterday a student told me that because a matrix was diagonalizable it must be symmetric).</p>
<p>However, that A implies B and one has observed B provides evidence for believing A. This statement is somehow the basis for the scientific method.</p>
<p>In the example given in the original post, that Maria bought ice-cream today <em>does</em> provide some evidence that she might have gone to the beach today.</p>
<p>The difference is that between deductive reasoning and inference.</p>
<p>Inferential reasoning is more common, more natural, and more powerful (why does anyone believe Euclid's axioms?).</p>
<p>I quote from V. I. Arnold (<a href="https://doi.org/10.1134/S1560354714060100" rel="nofollow noreferrer">Translation of the V. I. Arnold paper “From Superpositions to KAM Theory” (Vladimir Igorevich Arnold. Selected — 60, Moscow: PHASIS, 1997, pp. 727–740)</a>), who in his inimitably polemical manner explains the issue better than I can anyway:</p>
<blockquote>
<p>Now it became possible to apply the techniques developed in the
problem of adiabatic invariants. As soon as I accomplished that,
Kolmogorov suggested that I should submit the paper on perpetual
adiabatic invariance to ZhETF, the main physical journal in the
USSR. A few weeks later, M. A. Leontovich (who was, as far as I
remember, a deputy to the editor-in-chief of ZhETF) invited me to
his home (near the Atomic Energy Institute of the USSR Academy of
Sciences) to discuss the manuscript. Having fed me, as usual, by
boiled buckwheat and calling me, as usual, “Dimka” (M. A. called me
in such a way until his death), Mikhail Aleksandrovich explained to me
that the paper could not be published in ZhETF due to the following
reasons.</p>
<ol>
<li>The manuscript contained the words “theorem” and “proof” forbidden in ZhETF.</li>
<li>The manuscript claimed that “A implies B” while every physicist knew examples showing that B does not imply A.</li>
<li>The manuscript used the unintelligible terms “Lebesgue measure”, “invariant tori”, “Diophantine conditions”. Mikhail Aleksandrovich
therefore proposed that I should rewrite the paper.</li>
</ol>
<p>Now I realize how
right he was in defending a physical journal from the Bourbaki-like
mathematical jargon. For instance, indeed, while claiming that “A
implies B” the author must point out explicitly whether the converse
holds, otherwise any reader not spoiled by the mathematical slang
would understand the claim as “A is equivalent to B”.</p>
</blockquote>
<p>The issue is that mathematical deductive reasoning is rarely if ever applicable outside of mathematics where it should be replaced by the Bayesian paradigm of what Polya called plausible inference, of which it is an extreme case.</p>
<p>The moral for teaching is that the difference between necessary and sufficient conditions is not something to be passed over lightly and that confusion in regards to it is not necessarily evidence of stupidity. When one teaches that all symmetric matrices are diagonalizable one must remind students that not all diagonalizable matrices are symmetric. Moreover, all orthogonally diagonalizable matrices are symmetric ...</p>
|
1,653,416 | <p>We know that:
<a href="https://www.youtube.com/watch?v=w-I6XTVZXww" rel="nofollow">https://www.youtube.com/watch?v=w-I6XTVZXww</a>
$$S=1+2+3+4+\cdots = -\frac{1}{12}$$</p>
<p>So multiplying each term on the left-hand side by $2$ gives:
$$2S =2+4+6+8+\cdots = -\frac{1}{6}$$
This is the sum of the even numbers</p>
<p>Furthermore, we can add it to itself, shifting the terms by one place:
$$
\begin{align}
&1+2+3+4+\cdots \\
+\,&\phantom{1+{}}1+2+3+\cdots \\
=\,&1+3+5+7+\cdots \;=\; 2S
\end{align}
$$
This is the sum of the odd numbers</p>
<p>If we were to now sum the odd numbers and the even numbers like below:
$$ 2+4+6+8+\cdots \\[6pt] 1+3+5+7+\cdots \\[6pt] \text{if we add the terms in a certain order we can get } 1+2+3+4+5+6+7+\cdots$$
This supposedly tells us that:
$$4S = S\\[6pt] 4 \left(\frac{-1}{12}\right)=\frac{-1}{12} \\[6pt] \frac{-1}{3} = \frac{-1}{12} $$</p>
<p>What is faulty with this proof?</p>
| Travis Willse | 155,629 | <p>Interpreted literally (i.e., using the usual sense of limits of infinite series), the first line,
$$1 + 2 + 3 + \cdots = -\frac{1}{12} ,$$
is simply false, as the series on the l.h.s. diverges.</p>
<p>What's true, for example, is that there's a natural way to extend the function $$Z(s) := \sum_{k = 1}^{\infty} k^{-s} ,$$ which is defined on $(1, \infty)$ (and in particular not at $s = -1$), to a function $\zeta$ defined for most complex numbers, including $s = -1$, and this function satisfies $\zeta(-1) = -\tfrac{1}{12}$. The partial sums of the series, evaluated at $s = -1$, are $1 + \cdots + n$, but this is <em>not</em> the same as saying $1 + 2 + 3 + \cdots = -\tfrac{1}{12}$.</p>
|
2,660,595 | <p>Ten people are sitting in a row, and each is thinking of a negative integer no smaller than $-15$. Each person subtracts, from his own number, the number of the person sitting to his right (the rightmost person does nothing). Because he has nothing else to do, the rightmost person observes that all the differences were positive. Let $x$ be the greatest integer owned by one of the 10 people at the beginning. What is the minimum possible value of $x$?</p>
<p>Not sure how to go about this. I think it is 1 since -14-(-15)=1. I'm not sure though. </p>
| quasi | 400,434 | <p>Claim: for every odd integer $x$ and every integer $n\ge 1$, $2^{n+1}\mid x^{2^n}-1$. Proceed by induction on $n$.
<p>
For $n=1$,
$$x^{2^1}-1 = x^2-1 = (x+1)(x-1)$$
which is a multiple of $2^{1+1}=4$, since $x$ is odd and hence $x+1$ and $x-1$ are both even.
<p>
Thus, the base case is verified.
<p>
Next, suppose the claim holds for a fixed integer $n \ge 1$.
<p>
By the inductive hypothesis, we have $2^{n+1}{\,\mid\,}\left(x^{2^n}-1\right)$.
\begin{align*}
\text{Then}\;\;& 2^{n+1}{\,\mid\,}\left(x^{2^n}-1\right)\\[4pt]
\implies\;&\left(2^{n+1}\right)^2{\,\mid\,}\left(x^{2^n}-1\right)^2\\[4pt]
\implies\;&2^{2n+2}{\,\mid\,}\left(x^{2^n}-1\right)^2\\[4pt]
\implies\;&2^{n+2}{\,\mid\,}\left(x^{2^n}-1\right)^2\\[20pt]
\text{Also}\;\;& 2^{n+1}{\,\mid\,}\left(x^{2^n}-1\right)\\[4pt]
\implies\;&2\left(2^{n+1}\right){\,\mid\,}2\left(x^{2^n}-1\right)\\[4pt]
\implies\;&2^{n+2}{\,\mid\,}2\left(x^{2^n}-1\right)\\[4pt]
\end{align*}
hence
$$2^{n+2}{\,\mid\,}\left(\left(x^{2^n}-1\right)^2 + 2\left(x^{2^n}-1\right)\right)$$
But identically, we have
$$\left(x^{2^n}-1\right)^2 + 2\left(x^{2^n}-1\right) =x^{2^{n+1}}-1$$
hence
$$2^{n+2}{\,\mid\,}\left(x^{2^{n+1}}-1\right)$$
which completes the induction.</p>
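<p>A quick numerical spot-check of the claim $2^{n+1}\mid x^{2^n}-1$ (for odd $x$, which the base case implicitly requires) in Python:</p>

```python
# Check 2^(n+1) | x^(2^n) - 1 for a range of odd x and several n
for x in range(-9, 10, 2):        # odd integers -9, -7, ..., 9
    for n in range(1, 8):
        assert (x**(2**n) - 1) % 2**(n + 1) == 0
```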
|
1,206,195 | <p>I am trying to find the maximum of $x^{1/x}$. I don't know how to find the derivative of this. I have plugged in some numbers and found that $e^{1/e}$ seems to be the maximum at around 1.44466786. I don't know if this is the maximum, and I would like an explanation of why it is/what the maximum is. essentially, how do I solve ${{dy}\over{dx}}(x^{1/x})=0$?</p>
| evinda | 75,843 | <p>$$x^{\frac{1}{x}}=e^{\ln{x^{\frac{1}{x}}}}=e^{\frac{1}{x} \ln x}$$</p>
<p>Can you now find the derivative?</p>
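<p>For completeness, a sketch of the computation the hint sets up:
$$\frac{d}{dx}\,x^{\frac{1}{x}}=e^{\frac{1}{x}\ln x}\cdot\frac{d}{dx}\left(\frac{\ln x}{x}\right)=x^{\frac{1}{x}}\cdot\frac{1-\ln x}{x^2},$$
which vanishes exactly when $\ln x=1$, i.e. $x=e$; the derivative is positive for $0<x<e$ and negative for $x>e$, so $e^{1/e}\approx 1.44467$ is indeed the maximum, as you observed numerically.</p>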
|
191,796 | <blockquote>
<p>I met with the following difficulty reading the paper <a href="http://www.cnki.com.cn/Article/CJFDTotal-ZZDZ198801008.htm" rel="nofollow">Li, Rong Xiu "The properties of a matrix order column" (1988)</a>:</p>
<p>Define the matrix $A=(a_{jk})_{n\times n}$, where
$$a_{jk}=\begin{cases}
j+k\cdot i&j<k\\
k+j\cdot i&j>k\\
2(j+k\cdot i)& j=k
\end{cases}$$
and $i^2=-1$.</p>
<p>The author says it is easy to show that $\operatorname{rank}(A)=n$. I have proved it for $n\le 5$, but I couldn't prove it for general $n$.</p>
</blockquote>
<p>Following is an attempt to solve this problem:
let
$$A=P+iQ$$
where
$$P=\begin{bmatrix}
2&1&1&\cdots&1\\
1&4&2&\cdots& 2\\
1&2&6&\cdots& 3\\
\cdots&\cdots&\cdots&\cdots&\cdots\\
1&2&3&\cdots& 2n
\end{bmatrix},Q=\begin{bmatrix}
2&2&3&\cdots& n\\
2&4&3&\cdots &n\\
3&3&6&\cdots& n\\
\cdots&\cdots&\cdots&\cdots&\cdots\\
n&n&n&\cdots& 2n\end{bmatrix}$$</p>
<p>and define
$$J=\begin{bmatrix}
1&0&\cdots &0\\
-1&1&\cdots& 0\\
\cdots&\cdots&\cdots&\cdots\\
0&\cdots&-1&1
\end{bmatrix}$$
then we have
$$JPJ^T=J^TQJ=\begin{bmatrix}
2&-2&0&0&\cdots&0\\
-2&4&-3&\ddots&0&0\\
0&-3&6&-4&\ddots&0\\
\cdots&\ddots&\ddots&\ddots&\ddots&\cdots\\
0&0&\cdots&-(n-2)&2(n-1)&-(n-1)\\
0&0&0&\cdots&-(n-1)&2n
\end{bmatrix}$$
and $$A^HA=(P-iQ)(P+iQ)=P^2+Q^2+i(PQ-QP)=\binom{P}{Q}^T\cdot\begin{bmatrix}
I& iI\\
-iI & I
\end{bmatrix} \binom{P}{Q}$$</p>
| math110 | 38,620 | <p>I use Christian Remling's idea. In fact, the eigenvalues of the matrix $$B_{ij}=\min{\{i,j\}}$$ are
$$\dfrac{1}{4\sin^2{\dfrac{(2j-1)\pi}{2(2n+1)}}},\quad j=1,2,\cdots,n.$$
proof:
We have
$$B=\begin{bmatrix}
1&1&1&\cdots&1&1\\
1&2&2&\cdots&2&2\\
1&2&3&\cdots&3&3\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
1&2&3&\cdots&n-1&n-1\\
1&2&3&\cdots&n-1&n
\end{bmatrix}
$$
and it is easy to check that
$$C=B^{-1}=\begin{bmatrix}
2&-1\\
-1&2&-1\\
&\ddots&\ddots&\ddots\\
&&-1&2&-1\\
&&&-1&1
\end{bmatrix}$$
(note the entry $1$ in the bottom-right corner; it matters below). Consider
$$b_{n}=|\lambda C-I|=\begin{vmatrix}
\lambda-2&1&&&\\
1&\lambda-2&1&&\\
&\ddots&\ddots&\ddots&\\
&&1&\lambda-2&1\\
&&&1&\lambda-1
\end{vmatrix}
$$
and let $a_{m}$ be the $m\times m$ determinant of the same shape but with $\lambda-2$ in <em>every</em> diagonal entry. Expanding along the last row,
$$a_{m+1}=(\lambda-2)a_{m}-a_{m-1},\qquad b_{n}=(\lambda-1)a_{n-1}-a_{n-2}.$$
Let $\lambda-2=-2\cos{x}$. Then $a_{1}=-2\cos{x}$, $a_{2}=4\cos^2{x}-1$, and induction gives
$$a_{m}=(-1)^m\cdot\dfrac{\sin{(m+1)x}}{\sin{x}}.$$
Substituting, and using $\sin{(n-1)x}=\sin{nx}\cos{x}-\cos{nx}\sin{x}$,
$$b_{n}=(1-2\cos{x})(-1)^{n-1}\dfrac{\sin{nx}}{\sin{x}}-(-1)^{n}\dfrac{\sin{(n-1)x}}{\sin{x}}=(-1)^{n-1}\cdot\dfrac{\sin{nx}-\sin{(n+1)x}}{\sin{x}}.$$
Hence
$$b_{n}=0\Longrightarrow \sin{(n+1)x}=\sin{nx}\Longrightarrow x=\dfrac{(2j-1)\pi}{2n+1},\quad j=1,2,\cdots,n,$$
so the eigenvalues of $B^{-1}$ are
$$\lambda=2-2\cos{x}=4\sin^2{\dfrac{x}{2}}=4\sin^2{\dfrac{(2j-1)\pi}{2(2n+1)}},$$
and the eigenvalues of $B$ are their reciprocals, as claimed.</p>
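<p>A numerical sanity check in plain Python (the determinant helper below is hand-rolled so that no external libraries are needed): for $B_{ij}=\min\{i,j\}$, $\det(B-\mu I)$ vanishes at $\mu_j = 1\big/\left(4\sin^2\frac{(2j-1)\pi}{2(2n+1)}\right)$, e.g. for $n=6$:</p>

```python
import math

def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n = len(M)
    sign = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if M[p][c] == 0.0:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            sign = -sign
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for t in range(c, n):
                M[r][t] -= f * M[c][t]
    prod = sign
    for c in range(n):
        prod *= M[c][c]
    return prod

n = 6
B = [[float(min(r, c)) for c in range(1, n + 1)] for r in range(1, n + 1)]
for j in range(1, n + 1):
    mu = 1.0 / (4.0 * math.sin((2*j - 1) * math.pi / (2 * (2*n + 1)))**2)
    A = [[B[r][c] - (mu if r == c else 0.0) for c in range(n)] for r in range(n)]
    assert abs(det(A)) < 1e-6 * max(1.0, mu)**n
```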
|
1,083,841 | <p>I have extracted the below passage from the wikipedia webpage - Point (geometry): </p>
<blockquote>
<p>In particular, the geometric points do not have any length, area, volume, or any other dimensional attribute. </p>
</blockquote>
<p>I think the above passage implies that the point is zero-dimensional. If it is zero-dimensional, how can it form a one-dimensional line? </p>
<p>Physics texts sometimes talk of lines' being made up of points, planes' being made up of lines and so forth. Clearly a line segment, thought of as a connected interval of the real numbers, cannot be built as a countable union of points. What axiom systems define the building up of a line from points, or, how do we rigorously define the building of a line from points? </p>
<hr>
<p><em>Links:</em></p>
<ol>
<li>The section one (<a href="https://www.marxists.org/reference/archive/einstein/works/1910s/relative/ch01.htm" rel="noreferrer">Physical meaning of geometrical propositions</a>) of part one of the book "Relativity: The Special and General Theory" seems to be giving Einsteins view on this matter. </li>
<li><a href="https://hsm.stackexchange.com/questions/8400/what-was-the-intended-utility-of-euclids-definitions-of-lines-and-points">What was the intended utility of Euclid's definitions of lines and points?</a></li>
</ol>
<p>Related: History of Euclidean and Non-Euclidean Geometry</p>
| Community | -1 | <p>The trick is that there's more to a line than just being made up of points -- the line is also known to live in some sort of <em>topological space</em> or some richer structure. e.g. the axioms of Euclidean geometry talk not just of points lying on lines, but that one point on a line may be between others, that line segments might be congruent, and other stuff.</p>
<p>This other stuff is important to the "lineness" of a line.</p>
<p><em>Within the context of a topological space</em>, one can give a complete description of any shape in that space by specifying which points are in the shape. Thus, the habit of describing shapes in terms of sets of points.</p>
|
1,083,841 | <p>I have extracted the below passage from the wikipedia webpage - Point (geometry): </p>
<blockquote>
<p>In particular, the geometric points do not have any length, area, volume, or any other dimensional attribute. </p>
</blockquote>
<p>I think the above passage implies that the point is zero-dimensional. If it is zero-dimensional, how can it form a one-dimensional line? </p>
<p>Physics texts sometimes talk of lines' being made up of points, planes' being made up of lines and so forth. Clearly a line segment, thought of as a connected interval of the real numbers, cannot be built as a countable union of points. What axiom systems define the building up of a line from points, or, how do we rigorously define the building of a line from points? </p>
<hr>
<p><em>Links:</em></p>
<ol>
<li>The section one (<a href="https://www.marxists.org/reference/archive/einstein/works/1910s/relative/ch01.htm" rel="noreferrer">Physical meaning of geometrical propositions</a>) of part one of the book "Relativity: The Special and General Theory" seems to be giving Einsteins view on this matter. </li>
<li><a href="https://hsm.stackexchange.com/questions/8400/what-was-the-intended-utility-of-euclids-definitions-of-lines-and-points">What was the intended utility of Euclid's definitions of lines and points?</a></li>
</ol>
<p>Related: History of Euclidean and Non-Euclidean Geometry</p>
| Simon S | 21,495 | <p>It's a good question. Here's one approach that is broadly consistent with modern measure theory:</p>
<p>Start with a line segment of length $1$. If we halve its length $n$ times, then the resulting line segment has length of $1/2^n$ and that is always greater than the length of a point in the line. Write $L(point)$ for that quantity, $L$ for Length.</p>
<p>Then whatever $L(point)$ is (and assuming it is defined), we have</p>
<p>$$0 \leq L(point) < \frac{1}{2^n}$$</p>
<p>As $n$ is arbitrary, we can make $1/2^n$ as small as we like. The only viable conclusion is that $L(point) = 0$.</p>
<hr>
<p>Building up the other way, from the point to a line segment, is problematic. How can we multiply zero by anything and get something greater than zero? We can't without throwing out the real numbers as we understand them. That is too high a price. This is why the argument starts with non-zero quantities and goes down to zero.</p>
|
1,083,841 | <p>I have extracted the below passage from the wikipedia webpage - Point (geometry): </p>
<blockquote>
<p>In particular, the geometric points do not have any length, area, volume, or any other dimensional attribute. </p>
</blockquote>
<p>I think the above passage implies that the point is zero-dimensional. If it is zero-dimensional, how can it form a one-dimensional line? </p>
<p>Physics texts sometimes talk of lines' being made up of points, planes' being made up of lines and so forth. Clearly a line segment, thought of as a connected interval of the real numbers, cannot be built as a countable union of points. What axiom systems define the building up of a line from points, or, how do we rigorously define the building of a line from points? </p>
<hr>
<p><em>Links:</em></p>
<ol>
<li>The section one (<a href="https://www.marxists.org/reference/archive/einstein/works/1910s/relative/ch01.htm" rel="noreferrer">Physical meaning of geometrical propositions</a>) of part one of the book "Relativity: The Special and General Theory" seems to be giving Einsteins view on this matter. </li>
<li><a href="https://hsm.stackexchange.com/questions/8400/what-was-the-intended-utility-of-euclids-definitions-of-lines-and-points">What was the intended utility of Euclid's definitions of lines and points?</a></li>
</ol>
<p>Related: History of Euclidean and Non-Euclidean Geometry</p>
| Tyler Durden | 63,397 | <p>I assume you mean a line segment, not a line.</p>
<p>A line segment is not a "set of points". Euclid defines a line segment as a length without width. In other words, a line segment is defined as its length, not as a set of points.</p>
|
1,299,127 | <p>Can someone please help me answer this question as I cannot seem to get to the answer.
Please note that the Cauchy integral formula must be used in order to solve it.</p>
<p>Many thanks in advance!
\begin{equation*}
\int_{|z|=3}\frac{e^{zt}}{z^2+4}\,dz=\pi i\sin(2t).
\end{equation*}</p>
<p>Also $|z| = 3$ is given the counterclockwise direction.</p>
| Alex Fok | 223,498 | <p>Since all the poles of the integrand are enclosed by the contour, we can instead compute the residue at infinity (substituting $z\mapsto 1/z$):
\begin{eqnarray}
\int_{|z|=3}\frac{e^{zt}}{z^2+4}dz=2\pi i\text{Res}_{z=0}\frac{1}{z^2}\frac{e^{\frac{t}{z}}}{\frac{1}{z^2}+4}=2\pi i\text{Res}_{z=0}\frac{e^{\frac{t}{z}}}{1+4z^2}
\end{eqnarray}
Note that
$\displaystyle \frac{e^{\frac{t}{z}}}{1+4z^2}=\left(\sum_{n=0}^\infty (-1)^n4^nz^{2n}\right)\left(\sum_{n=0}^\infty \frac{t^n}{n! z^n}\right)$. Collecting the coefficient of $1/z$ gives $\displaystyle\text{Res}_{z=0}\frac{e^{\frac{t}{z}}}{1+4z^2}=\sum_{n=0}^\infty \frac{(-1)^n4^nt^{2n+1}}{(2n+1)!}=\frac{1}{2}\sin 2t$, so the integral equals $2\pi i\cdot\frac{1}{2}\sin 2t=\pi i\sin 2t$.</p>
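<p>A numerical sanity check (a self-contained Python sketch; <code>contour_integral</code> is my own helper that discretizes the circle, not a library routine):</p>

```python
import cmath, math

def contour_integral(f, R=3.0, N=1000):
    # Riemann sum of f(z) dz over the circle |z| = R, counterclockwise;
    # for integrands analytic near the contour this converges geometrically
    total = 0j
    for m in range(N):
        th = 2 * math.pi * m / N
        z = R * cmath.exp(1j * th)
        total += f(z) * (1j * z) * (2 * math.pi / N)
    return total

t = 0.7
val = contour_integral(lambda z: cmath.exp(z * t) / (z * z + 4))
exact = 1j * math.pi * math.sin(2 * t)
assert abs(val - exact) < 1e-10
```

<p>The discretized sum agrees with $\pi i\sin 2t$ to machine precision.</p>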
|
1,299,127 | <p>Can someone please help me answer this question as I cannot seem to get to the answer.
Please note that the Cauchy integral formula must be used in order to solve it.</p>
<p>Many thanks in advance!
\begin{equation*}
\int_{|z|=3}\frac{e^{zt}}{z^2+4}\,dz=\pi i\sin(2t).
\end{equation*}</p>
<p>Also $|z| = 3$ is given the counterclockwise direction.</p>
| Kaj Hansen | 138,538 | <p>You should know that using the residue theorem would be easier, but if we are to restrict ourselves to <a href="http://en.wikipedia.org/wiki/Cauchy%27s_integral_formula" rel="nofollow noreferrer"><strong>Cauchy's integral formula</strong></a>, then here's one way of attacking it:</p>
<p>First, note that the integrand is a quotient of two entire functions. As such, the integrand is analytic everywhere except the points at which the denominator is zero. Since $z^2+4$ can be factored as $(z-2i)(z+2i)$, then the only points at which the integrand is not analytic are $\pm 2i$. Unfortunately, both of these points are inside the circle $|z| = 3$, so in order to apply Cauchy's integral formula, we will have to be clever.</p>
<p>Let $C$ be the circle $|z| = 3$ oriented counterclockwise. Let $C_1$ be the upper half of the circle $|z| = 3$ together with the line segment $[-3, 3]$ oriented counterclockwise. Let $C_2$ be the lower half together with the line segment $[-3, 3]$ oriented counterclockwise. Notice we have:</p>
<p>$$\int_C \frac{e^{zt}}{z^2+4} \ dz = \int_{C_1} \frac{e^{zt}}{z^2+4} \ dz + \int_{C_2} \frac{e^{zt}}{z^2+4} \ dz$$</p>
<p>Now we can attack the two integrals on the right hand side separately using Cauchy's integral formula. For the first, e.g., you can let $\displaystyle f(z) = \frac{e^{zt}}{z+2i}$, which is analytic everywhere inside $C_1$, and your integrand becomes $\displaystyle \frac{f(z)}{z-2i}$. </p>
<hr>
<p><strong>Alternative method</strong>: </p>
<p>Using partial fraction decomposition, we have $\displaystyle \frac{1}{z^2+4} = \frac{i}{4(z+2i)} - \frac{i}{4(z-2i)}$. Hence:</p>
<p>$$\int_C \frac{e^{zt}}{z^2+4} \ dz = \int_C \frac{i\,e^{zt}}{4(z+2i)} \ dz - \int_C \frac{i\,e^{zt}}{4(z-2i)} \ dz$$</p>
<p>And then one can apply Cauchy's integral formula on the two separate pieces without having to split the contour.</p>
|
2,873,474 | <p>My staff room is having a debate about the construction of sample spaces.</p>
<blockquote>
<p>When you toss a coin twice, do you consider the sample space to be
$$\{H,H\}, \{H,T\}, \{T,T\}$$ or $$\{H,H\}, \{H,T\}, \{T,H\},\{T,T\}$$</p>
</blockquote>
<p>In my humble opinion, I feel there is no single correct answer at the moment because we do not have enough information. My feeling is that a sample space can only be established here if the order is relevant to the question at hand. In the absence of this information, either one could be the sample space.</p>
<p>However, I would like the community's thoughts on this. Is there in fact a single correct answer? Is there a mathematical reason for it being that answer?</p>
| Patrick | 317,074 | <p>It depends what the sample space is <strong>for</strong>. </p>
<p>If you are tossing 2 coins and just counting the number of heads/tails that happened then your first sample space would be correct. I.e you <strong>don't</strong> care about the order that the events occurred.</p>
<p>If you are tossing 2 coins and recording the 1st outcome and 2nd outcome separately then the second sample space would be correct. I.e. you <strong>do</strong> care about the order that the events occurred.</p>
<p>Alternatively, you could only be recording whether a head was flipped at all with either coin in which case the sample space becomes (with your notation):</p>
<blockquote>
<p>{0}, {H}</p>
</blockquote>
<p>So to summarize, a sample space is defined as the set of all possible measured outcomes, so its contents depend upon what you are measuring.</p>
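<p>The two candidate sample spaces correspond directly to two standard <code>itertools</code> constructions — a Python sketch:</p>

```python
from itertools import product, combinations_with_replacement

ordered   = list(product("HT", repeat=2))                  # order recorded
unordered = list(combinations_with_replacement("HT", 2))   # order ignored

assert len(ordered) == 4    # HH, HT, TH, TT
assert len(unordered) == 3  # HH, HT, TT
```

<p>One practical reason the ordered space is usually preferred for a fair coin: its four outcomes are equally likely ($1/4$ each), whereas in the unordered space $\{H,T\}$ has probability $1/2$ while $\{H,H\}$ and $\{T,T\}$ have probability $1/4$ each.</p>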
|
3,493,151 | <p>This is a calculus problem from a high school math contest in Greece,from 2012.</p>
<p>I wish to know some solutions for this. I attempted to solve it.</p>
<blockquote>
<p>Let <span class="math-container">$f:\Bbb{R} \to \Bbb{R}$</span> be differentiable such that <span class="math-container">$\lim_{x \to +\infty}f(x)=+\infty$</span> and <span class="math-container">$\lim_{x \to +\infty}\frac{f'(x)}{f(x)}=2$</span>. Show that <span class="math-container">$$\lim_{x \to +\infty}\frac{f(x)}{x^{2012}}=+\infty$$</span></p>
</blockquote>
<p>Here is my attempt:</p>
<p><span class="math-container">$\frac{f(x)}{x^{2012}}=e^{\ln{\frac{f(x)}{x^{2012}}}}$</span></p>
<p>Now <span class="math-container">$\ln{\frac{f(x)}{x^{2012}}}=\ln{f(x)}-2012\ln x=\ln{x}\left( \frac{\ln{f(x)}}{\ln{x}}-2012\right)$</span></p>
<p>Now from hypothesis we see that <span class="math-container">$\lim_{x \to +\infty}\ln{f(x)}=+\infty$</span></p>
<p>By L'Hospital's rule we have that <span class="math-container">$$\lim_{x \to +\infty}\frac{\ln{f(x)}}{\ln{x}}=\lim_{x \to +\infty}x \frac{f'(x)}{f(x)}=2(+\infty)=+\infty$$</span></p>
<p>Thus <span class="math-container">$$\lim_{x \to +\infty}\ln{x}\left( \frac{\ln{f(x)}}{\ln{x}}-2012\right)=+\infty$$</span></p>
<p>Finally <span class="math-container">$\lim_{x \to +\infty}\frac{f(x)}{x^{2012}}=+\infty$</span></p>
<p>Is this solution correct?</p>
<p>If it is,then are there also better and quicker ways to solve this?</p>
<p>Thank you in advance.</p>
| Peter Szilas | 408,605 | <p>Credit to user622002.</p>
<p>Show <span class="math-container">$\lim_{x \rightarrow \infty} \dfrac{f(x)}{x^n}=\infty$</span>, <span class="math-container">$n \in \mathbb{N}$</span>.</p>
<p>Induction:</p>
<p>Base case: <span class="math-container">$n=0$</span> √.</p>
<p>Hypothesis:</p>
<p><span class="math-container">$\lim_{x \rightarrow \infty} \dfrac{f(x)}{x^n}=\infty$</span>;</p>
<p>L'Hospital:</p>
<p><span class="math-container">$\lim_{x \rightarrow \infty}\dfrac{f(x)}{x^{n+1}}=$</span></p>
<p><span class="math-container">$\lim_{x \rightarrow \infty}\dfrac{f(x)(f'(x)/f(x))}{(n+1)x^n}$</span>;</p>
<p>For large enough <span class="math-container">$x$</span>: <span class="math-container">$f'(x)/f(x)>1$</span>;</p>
<p><span class="math-container">$(1/(n+1))\dfrac{f(x)}{x^n}\lt$</span></p>
<p><span class="math-container">$ \dfrac{f(x)\,(f'(x)/f(x))}{(n+1)x^n}$</span>.</p>
<p>Taking limits, invoking the hypothesis for the left hand side, we get</p>
<p><span class="math-container">$\lim_{x \rightarrow \infty} \dfrac{f(x)}{x^{n+1}}=\infty$</span>.</p>
|
4,054,428 | <p>The question is</p>
<p>Let <span class="math-container">$X$</span> be a discrete random variable with probability mass function</p>
<p><a href="https://i.stack.imgur.com/DhJVI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DhJVI.png" alt="enter image description here" /></a></p>
<p>(a) Find <span class="math-container">$E(X)$</span> <br/>I did <span class="math-container">$E(X) = -2*.3+ -1*6 + 12*.1 = -5.4$</span><br/>
(b) Find <span class="math-container">$Var(X)$</span> <br/> I did <span class="math-container">$E(x^{2})-(E(x))^{2}= (-2^{2} * .3 + -1*.6 + 12*.1) - (-5.4)^{2} = -29.76$</span> <br/>
(c) Find the expected value of <span class="math-container">$\bar{X}$</span> (the sample mean) <br/> Would this just be the same thing as <span class="math-container">$E(X)$</span>?
(d) if <span class="math-container">$n = 100$</span>, what is the variance of <span class="math-container">$\bar{X}$</span>? <br/> would I just multiply the variance by 100? <br/></p>
<p>Last question is if I wanted to put this all in R, how would I input my CDF into R and find the expected value and variance, etc. I understand it on paper (most of the time) but I am unsure how I would put it in R, any help is appreciated, thank you in advance.</p>
| RobertTheTutor | 883,326 | <p>You forgot to square the x's before multiplying and then adding.
<span class="math-container">$$E[X^2] = (-2)^2\cdot 0.3 + (-1)^2 \cdot 0.6 + (12)^2\cdot 0.1 = 1.2 + 0.6 + 14.4 = 16.2$$</span>
When you multiply a variable by <span class="math-container">$c$</span>, you multiply its mean by <span class="math-container">$c$</span> and its standard deviation by <span class="math-container">$|c|$</span>, which means the variance is multiplied by <span class="math-container">$c^2$</span>. The expected value of a sum is the sum of the expected values.</p>
<p>In R, you could construct a vector of <span class="math-container">$X$</span> values and a vector of <span class="math-container">$p$</span> values</p>
<p><code>x &lt;- c(-2, -1, 12)</code></p>
<p><code>p &lt;- c(0.3, 0.6, 0.1)</code></p>
<p>I'm afraid I don't know the sampling functions in R very well yet. I have used the ones in the rethinking module.</p>
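<p>For completeness, the same arithmetic in Python (a direct translation of the vectors above; the asker wanted R, but the formulas are identical in any language):</p>

```python
x = [-2, -1, 12]
p = [0.3, 0.6, 0.1]

# E[X] = sum of x_i * p_i; E[X^2] uses the *squared* values (the step missed above).
EX  = sum(xi * pi for xi, pi in zip(x, p))
EX2 = sum(xi ** 2 * pi for xi, pi in zip(x, p))
var = EX2 - EX ** 2          # Var(X) = E[X^2] - (E[X])^2 = 16.2  (here E[X] = 0)

var_of_mean = var / 100      # Var of the sample mean for n = 100: divide by n, not multiply
```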
|
2,894,376 | <blockquote>
<p>$2$ different History books, $3$ different Geography books and $2$ different Science books are placed on a book shelf. How many different ways can they be arranged? How many ways can they be arranged if books of the same subject must be placed together?</p>
</blockquote>
<p>For the first part of the question I think the answer is </p>
<p>$$(2+3+2)! = 5040 \text{ different ways}$$</p>
<p>For the second part of the question I think that I will need to multiply the different factorials of each subject. There are $2!$ arrangements for science, $3!$ for geography and $2!$ for history. Am I correct in saying that the number of different ways to place the books on the shelf together by subject would be
$$2! \times 3! \times 2! = 24 \text{ different ways}$$ </p>
| Ary Jazz | 587,160 | <p>How many ways can the books be arranged? As you said = $(2+3+2)! = 7!$</p>
<p>If the books of the same subject need to be arranged together, you need to calculate the permutations of the groups and multiply them by the permutations within every category.</p>
<p>$3! (2! \times 3! \times 2!) = 144$ ways</p>
<p>Group permutations x (history permutations x geography permutations x science permutations)</p>
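<p>Both counts can be confirmed by brute force over all $7!$ arrangements (the book labels below are hypothetical placeholders):</p>

```python
from itertools import permutations

books = ["H1", "H2", "G1", "G2", "G3", "S1", "S2"]

def subjects_together(arr):
    # Each subject's books must occupy consecutive positions.
    for s in "HGS":
        pos = [i for i, b in enumerate(arr) if b[0] == s]
        if max(pos) - min(pos) != len(pos) - 1:
            return False
    return True

all_perms = list(permutations(books))
grouped = sum(subjects_together(p) for p in all_perms)
# 5040 arrangements in total, 144 with subjects kept together
assert len(all_perms) == 5040 and grouped == 144
```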
|
118,029 | <p>It is well known that a generic hypersurface of degree $2n-3$ in $\mathbb CP^n$ has finite number of lines. I would like to ask a couple of questions about lines on Fermat hypersurfaces and their symmetries: </p>
<p>$$\sum_{i=1}^{n+1}x_i^{2n-3}=0.$$</p>
<p>Fermat hypersurfaces have a group of automorphisms of order $(2n-3)^n(n+1)!$. In the case $n=3$ (the case of the cubic) this group acts transitively on the collection of $27$ lines, and this raises some questions.</p>
<p>The first question is pedagogical, I plan to use it for teaching and really want to know the answer. </p>
<p><em>Question 1.</em> Is there some slick way to give a high-school proof of the fact that there are exactly $27$ lines on Femat cubic in $\mathbb CP^3$ using (or not) the symmetries of the cubic but without using any theory at all?</p>
<p>Further questions are not for teaching, I am just curious about them.</p>
<p><em>Question 2.</em> Is it known that a Fermat hypersurface of degree $2n-3$ has finite number of lines for any
$n$? Is it known that these lines are never multiple?</p>
<p><em>Question 3.</em> Can one say something about the number of orbits of the action of symmetries on lines on
a Fermat hypersurface of degree $2n-3$? For example, what happen in the case of quintic, $n=5$? According
to wiki a generic quintic has $2875=125\cdot 23$ lines, so if Fermat quintic is generic, there should be more than one orbit in the action on lines on it. What is the number of orbits?</p>
<p>I would be happy to know the answer on any of these questions.</p>
| Johannes Nordström | 13,061 | <p>Regarding question 1, any line in the Fermat cubic $C = \{X_0^3 + X_1^3 + X_2^3 + X_3^3 = 0\}$ must meet the coordinate hyperplane $H_0 = \{X_0 = 0\}$. So which points $x \in (C \cap H_0)$ can lie on lines? If $Y, Z$ are homogeneous coordinates on $T_x(C \cap H_0) \cong \mathbb{P}^1$, then the restriction of $X_0^3 + X_1^3 + X_2^3 + X_3^3$ to $T_x C$ is of the form $X_0^3 + F(Y,Z)$ for a homogeneous cubic $F$. For $x$ to lie on a line, $X_0^3 + F$ must factorise, so $F$ is a cube. This means that $x$ is an inflection point of the plane cubic curve $C \cap H_0 = \{X_1^3 + X_2^3 + X_3^3 = 0\}$. The inflection points are given by intersection with the zero set of the Hessian determinant $216X_1X_2X_3$. Hence the intersection of any line in $C$ with any coordinate hyperplane must actually have two coordinates equal to 0, and it follows that the lines consist of $\{X_0^3 + X_1^3 = X_2^3 + X_3^3 = 0\}$ and its two images under permuting the coordinates (9 lines in each).</p>
<p>P.S. Here is a related exercise I like. Once one has identified the 27 lines in the Fermat cubic $C$, one can use the symmetries of $C$ to guess how to arrange 6 points in $\mathbb{P}^2$ so that the blow-up is isomorphic to $C$, and then write down an explicit rational map $\mathbb{P}^2 \dashrightarrow \mathbb{P}^3$ that maps birationally onto $C$.</p>
|
275,820 | <p>I have a question concerning the definition of the square root of bounded linear operators. To introduce some notation: tr denotes the trace of linear operators and $\mathcal{L}(H)$ denotes the set of bounded linear operators, from H to H, where H symbolizes a Hilbert space. L' stands for the adjoint operator of L.
We introduced the space of trace class operators in the following way: $\{L \in \mathcal{L}(H) \ : \ tr((LL')^{\frac{1}{2}}) < \infty \}$.
The problem is that I don't know how the square root of the above-mentioned operator is defined.</p>
| Martin Argerami | 22,857 | <p>It is a well-known fact that every bounded positive operator on a Hilbert space (which are exactly those of the form $LL'$ for some bounded $L$) has a unique positive square root. This is a consequence of the <em>continuous functional calculus</em>; you can find about in any Functional Analysis book that talks about C$^*$-algebras. </p>
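<p>To make the abstract statement concrete in the simplest setting (a finite-dimensional illustration for intuition, not the C*-algebraic construction the answer refers to): for a nonzero $2\times 2$ positive semidefinite matrix $M$ with trace $t$ and determinant $d$, Cayley–Hamilton gives the explicit positive square root $\sqrt{M}=(M+\sqrt{d}\,I)/\sqrt{t+2\sqrt{d}}$:</p>

```python
import math

def sqrt_2x2_psd(a, b, c):
    """Positive square root of the symmetric PSD matrix [[a, b], [b, c]], M != 0.

    By Cayley-Hamilton, M^2 = t M - d I with t = tr M, d = det M, so
    S = (M + sqrt(d) I) / sqrt(t + 2 sqrt(d)) satisfies S^2 = M, and S is positive.
    """
    t, d = a + c, a * c - b * b
    s = math.sqrt(d)
    r = math.sqrt(t + 2 * s)
    return [[(a + s) / r, b / r], [b / r, (c + s) / r]]

S = sqrt_2x2_psd(2.0, 1.0, 2.0)            # M = [[2,1],[1,2]] has eigenvalues 1 and 3
# Check that S @ S reproduces M entrywise.
top_left  = S[0][0] ** 2 + S[0][1] * S[1][0]
top_right = S[0][0] * S[0][1] + S[0][1] * S[1][1]
assert abs(top_left - 2.0) < 1e-12 and abs(top_right - 1.0) < 1e-12
```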
|
215,531 | <p>I must solve following inequation:</p>
<p>$\frac{x-3}{1-2x}<0$</p>
<p>Now the text says that I have to solve the inequation "direct" without solving the according equations.</p>
<p>What does that mean?</p>
<p>I would say that I have to multiply by $(1-2x)$ then I get</p>
<p>$x-3<0$ and </p>
<p>$L_1 = [x \le 2]$</p>
<p>$L_2 = [x >0] $</p>
<p>but I have the feeling that I'm doing something wrong.
Especially I don't understand what the paragraph about solving the inequation "direct" means. Could someone maybe explain that to me?</p>
| Berci | 41,488 | <p>If you multiply an inequality by a <em>negative</em> number, it will exchange the signs $>$ and $<$ (because for example $1<2$ but $-1>-2$...)
So it is not the best to multiply by the denominator unless you're sure it is positive!</p>
<p>I think, by 'direct', they meant as starting point to think about, when a fraction $\displaystyle\frac AB$ will be negative in general.. (If $A>0$ and $B<0$ or $A<0$ and $B>0$.)</p>
|
215,531 | <p>I must solve following inequation:</p>
<p>$\frac{x-3}{1-2x}<0$</p>
<p>Now the text says that I have to solve the inequation "direct" without solving the according equations.</p>
<p>What does that mean?</p>
<p>I would say that I have to multiply by $(1-2x)$ then I get</p>
<p>$x-3<0$ and </p>
<p>$L_1 = [x \le 2]$</p>
<p>$L_2 = [x >0] $</p>
<p>but I have the feeling that I'm doing something wrong.
Especially I don't understand what the paragraph about solving the inequation "direct" means. Could someone maybe explain that to me?</p>
| Mikasa | 8,581 | <p>Besides to above solutions see the table below:</p>
<p><img src="https://i.stack.imgur.com/73aXH.jpg" alt="enter image description here"></p>
<p>It shows that your desired interval would be $(-\infty, 1/2)\cup (3,+\infty)$</p>
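<p>The table's conclusion is easy to spot-check numerically (a sanity check, not a proof):</p>

```python
def in_solution_set(x):
    # The claimed solution set of (x-3)/(1-2x) < 0.
    return x < 0.5 or x > 3

# Sample points away from x = 1/2 (where the expression is undefined) and x = 3.
for x in [-10.0, 0.0, 0.4, 0.6, 1.0, 2.9, 3.1, 100.0]:
    value_negative = (x - 3) / (1 - 2 * x) < 0
    assert value_negative == in_solution_set(x)
```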
|
35,230 | <p>This happens frequently, both on the main site and on meta: An old question pops back up on the front page, I open it, the text under the title says "Modified today", but when I check the timeline the last event is years ago, even if I show vote summaries.</p>
<p>Here's the <a href="https://math.stackexchange.com/questions/924266/definition-of-bilinear-maps">latest one</a> for me. This is happening as of 12:00 noon, EDT.</p>
<p>My guesses are</p>
<ul>
<li>I don't have high enough rep to see the latest event. (I have ~5k.)</li>
<li>The event showing up in the timeline query is lagging behind the update of the "Last Modified" field, or some other database jiggery-pokery.</li>
<li>???</li>
</ul>
<p>Do others see this happening too? Does anyone know why it happens?</p>
<hr />
<p>Adding the key points from MartinSleziak's thorough answer below, for future readers:</p>
<ul>
<li>Questions get bumped by events on answers, which have their own, separate timelines.</li>
<li>Use the "Modified" value, under the title, to take you to the latest activity. It is a link, despite not looking at all like a link.</li>
</ul>
| Eric Wofsey | 86,856 | <p>A question will be labelled as "modified [time]" on the front page if either the question or one of its <em>answers</em> was modified. For instance, in the example you linked <a href="https://math.stackexchange.com/a/924288/86856">this answer</a> was edited today.</p>
|
136,453 | <p>For every $k\in\mathbb{N}$, let
$$
x_k=\sum_{n=1}^{\infty}\frac{1}{n^2}\left(1-\frac{1}{2n}+\frac{1}{4n^2}\right)^{2k}.
$$
Calculate the limit $\displaystyle\lim_{k\rightarrow\infty}x_k$.</p>
| Brett Frankel | 22,405 | <p>I would start by saying that much of number theory, to the extent that you can really describe number theory in a single sentence, is devoted to solving equations with integer solutions. And it turns out that understanding how these equations behave with respect to primes is often the key to understanding how they behave with respect to all integers.</p>
<p>Prime numbers are also important in understanding a great many concepts in abstract algebra, which generalize way beyond number theory. For example, if we look at an object with a finite number of symmetries, primes play a key role in understanding such symmetries (what I'm alluding to here is finite group theory).</p>
<p>Also, much of the abstract machinery developed to understand primes has found applications in other areas, particularly geometry. When we look at the set of zeros of a polynomial or complex analytic function in space, we do so by trying to understand the irreducible components of these sets, which correspond to something called "prime ideals."</p>
<p>I won't even get into the Riemann Hypothesis, except to say that a single conjecture (which we are unable to prove at present) is simultaneously a statement about primes, complex functions, convergence of series, and random matrices, to name just a few of the myriad formulations.</p>
<p>But perhaps the best reason to study primes is that they are simultaneously elementary and mysterious. It's remarkable just how little we know about these numbers after pondering them for millennia. For many mathematicians, that alone is sufficient motivation.</p>
|
285,841 | <p>I would like to solve the following problem:</p>
<p>$$\begin{array}{ll} \text{minimize} & \mathbf{x}^T \mathbf{A} \mathbf{x}\\ \text{subject to} & \mathbf{x}^T\mathbf{B}\mathbf{x} = 0\\ & \mathbf{x}^T \mathbf{x} = 1\end{array}$$</p>
<p>where $\bf x$ is a vector, $\bf A, \bf B$ are square matrices, and $\bf A$ is symmetric. </p>
<hr>
<p>Here is my thinking:</p>
<p>Use the Lagrange multiplier method,
\begin{equation}
\mathcal L (\bf x, \lambda, \mu) = \mathbf{x}^T \mathbf{A} \mathbf{x} - \lambda \mathbf{x}^T\mathbf{x} - \mu \mathbf{x}^T \mathbf{B} \mathbf{x}.
\end{equation}
Take the derivative with respect to $\bf x$, we get:
\begin{equation}
\bf{A x = \lambda x + \mu Bx}
\end{equation}
This is not exactly an eigenvalue problem or a generalized one. What's next?</p>
<p>I can apply the constraints and get $\lambda = \bf x^TAx$, $\mu = \bf x^TB^TAx/(x^TB^TBx)$. But I am looking for a method that can turn the problem into a linear problem, e.g. a generalized eigenvalue problem, so that I can apply standard numerical linear algebra algorithms.
In principle, if I can solve $\det (A-\lambda I - \mu B) = 0$, I can
eliminate, say, $\mu$. But this is not feasible, numerically. A perturbative solution with $|\mu|\ll 1$ is acceptable. </p>
<p><em>Question: Are there any methods, ideally using standard numerical linear algorithm, to solve this problem?</em></p>
<hr>
<p>These problems are similar but not the same:</p>
<p><a href="https://mathoverflow.net/questions/184538/linearly-constrained-eigenvalue-problem">Linearly constrained eigenvalue problem</a> </p>
<p>Thank you in advance. </p>
<p><strong>Edit</strong>: In view of the comments, I removed the "full rank" condition and no longer require $\bf A$ to be "positive definite". Hopefully, the problem may have a solution? </p>
<p>The background of the problem is as follows:
$\bf A$ is a Hamiltonian. $\bf x$ is its eigenvector with lowest energy. $\bf x^T Bx = 0$ represents a constraint imposed by a symmetry. In practice, $\bf A$ is truncated, and $\bf x^T B x \ne 0$. </p>
<p>Now, I am trying to reformulate the problem to guarantee the symmetry constraint $\bf x^T B x = 0$. As a result, $\bf x$ may not be an eigenvector of $\bf A$, which is the price to pay. My hope is that as the symmetry violation is small enough, the problem may still have an efficient solution. Hope this helps. </p>
| Igor Rivin | 11,142 | <p>Generically, your system will have no solution, since $\mathbf{x}^t B \mathbf{x}$ is rarely zero for full-rank matrices. In the special case where $B$ is a degenerate symmetric matrix, then $x$ is in the null-space of $B,$ but then your Lagrange multiplier equation seems to indicate that $x$ has to <em>also</em> be an eigenvector of $A,$ which, again, seems highly rare.</p>
|
198,132 | <p>Putting the equation $x^2 - x \sin(x) - \cos (x)$ into Wolfram Alpha, I am surprised that it has a nice <a href="http://www.wolframalpha.com/input/?i=roots%20of%20x%5E2%20-%20x%20sinx%20-%20cos%20x" rel="nofollow">parabolic shape</a>. Also, it has two complex roots.</p>
<p><strong>Question</strong></p>
<p>Is it possible to tell, in a simple way, that it has no real roots?</p>
| Bombyx mori | 32,240 | <p>I think you are confused by treating this as a standard quadratic, which can only have two real roots or a pair of conjugate complex roots. But it is not one. As others commented, it is more appropriate to use calculus to detect the distribution of roots on the real line. In the complex case your equation becomes $$z^{2}-z\frac{e^{iz}-e^{-iz}}{2i}-\frac{e^{iz}+e^{-iz}}{2}$$ and this equation is transcendental and probably does not have easy solutions. </p>
|
198,132 | <p>Putting the equation $x^2 - x \sin(x) - \cos (x)$ into Wolfram Alpha, I am surprised that it has a nice <a href="http://www.wolframalpha.com/input/?i=roots%20of%20x%5E2%20-%20x%20sinx%20-%20cos%20x" rel="nofollow">parabolic shape</a>. Also, it has two complex roots.</p>
<p><strong>Question</strong></p>
<p>Is it possible to tell, in a simple way, that it has no real roots?</p>
| copper.hat | 27,978 | <p>$f(x) = x^2-x\sin x-\cos x$ is even, $f(0) < 0$ and $\lim_{x \to \infty} f(x) = \infty$, hence the equation has at least two real roots. Also, $f'(x) = x(2-\cos x)$ satisfies $f'(x) \geq x \geq 0$ for $x \geq 0$, hence these are the only real roots.</p>
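<p>A numerical companion to this argument (not part of the proof): $f(0)=-1<0$, $f$ changes sign on $(1,2)$, and plain bisection pins down the positive root; evenness gives the negative one.</p>

```python
import math

f = lambda x: x * x - x * math.sin(x) - math.cos(x)

assert f(0) < 0                      # f(0) = -1
assert f(1) < 0 < f(2)               # the positive root lies in (1, 2)

# Plain bisection on [1, 2]; f is increasing there since f'(x) = x(2 - cos x) > 0.
lo, hi = 1.0, 2.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(f(root)) < 1e-12
assert abs(f(-root)) < 1e-12         # f is even, so -root is the other real root
```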
|
2,617,621 | <p>$$z^4 =\lvert z \lvert , z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| Arthur | 15,500 | <p>You are right that those are all the solutions, but I think there are easier ways to get there. For instance, I generally try to hold off on substituting $z = a+bi$ for as long as possible, since it rarely makes things nicer.</p>
<p>First off, begin by taking absolute values on both sides, giving $|z| = |z|^4$. This has the benefit of making the equation about a single, real number $|z|$ instead of one complex number $z$ and one real number $|z|$, which means it will be easier to extract useful information.</p>
<p>Since $|z|$ is a non-negative, real number, we must have either $|z| = 0$ or $|z| = 1$. Clearly $z = 0$ solves the original equation, so we look at what's left: $|z| = 1$.</p>
<p>If $|z| = 1$, the original equation reads $z^4 = 1$. This is well-known to have the four solutions $\pm 1, \pm i$. We insert back into the original equation to double-check that they are indeed solutions, and we're done.</p>
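<p>A quick numerical check that the five values satisfy $z^4=\lvert z\rvert$:</p>

```python
solutions = [0, 1, -1, 1j, -1j]
for z in solutions:
    # z**4 should equal |z| (compare with a tolerance for floating point safety)
    assert abs(z ** 4 - abs(z)) < 1e-12, z
```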
|
178,666 | <p>For every natural number n, let:</p>
<ul>
<li><p>Gn be the number of distinct group structures with at most n elements;</p></li>
<li><p>An be the number of distinct abelian group structures wit at most n elements;</p></li>
<li><p>Sn be the number of distinct solvable group structures with at most n elements.</p></li>
</ul>
<p>Question 1: Is there a known limit for the quotient An/Gn ?</p>
<p>Question 2: Is there a known limit for the quotient Sn/Gn ?</p>
| Geoff Robinson | 14,450 | <p>(Edited following Emil Jerabek's comment below) From results of L. Pyber (and implicitly, C. Sims) it appears likely that $\frac{f(n)}{g(n)} \to 1$ as $n \to \infty,$ where $f(n)$ is the number of isomorphism types of nilpotent groups of order $n$ and $g(n)$ is the number of isomorphism types of all groups of order $n,$ so minor modifications should yield the same answer for question 2 (which is a cumulative version; note also that all nilpotent groups are solvable).
Also, the asymptotic behaviour of the number of isomorphism types of Abelian groups of order $n$ and the number of isomorphism types of nilpotent groups of order $n$ are known: both are multiplicative, so it suffices to consider the case of $p$-groups. The number of isomorphism types of Abelian groups of order $p^{k}$ is $p(k),$ the number of partitions of $k,$ which behaves like $e^{c \sqrt{k}}$ for some (known!) constant $c.$ The number of isomorphism types of groups of order $p^{k}$ is asymptotically around $p^{\frac{2k^{3}}{27}}$ (proved by C. Sims and G. Higman). This suggests that the limit of question 1 should be zero, though again you ask for a cumulative version.</p>
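<p>The partition numbers $p(k)$ mentioned above (counting Abelian groups of order $p^k$) are easy to generate; a standard dynamic-programming sketch:</p>

```python
def partitions(n):
    # p[m] = number of partitions of m; build up by allowing parts 1, 2, ..., n.
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            p[m] += p[m - part]
    return p[n]

# So there are p(k) abelian groups of order p^k, e.g.:
assert partitions(1) == 1 and partitions(2) == 2 and partitions(3) == 3
assert partitions(5) == 7          # 7 abelian groups of order p^5
assert partitions(10) == 42
```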
|
52,480 | <p>The question comes from a statement in Concrete Mathematics by Graham, Knuth, and Patashnik on page 465.</p>
<p>$$\sum_{k \geq n} \frac{(\log k)^2}{k^2} = O \left(\frac{(\log n)^2}{n} \right).$$</p>
<p>How is this calculated?</p>
| Shai Covo | 2,810 | <p>Note that the function $f$ defined by
$$
f(x) = \frac{{(\log x)^2 }}{{x^2 }}
$$
is decreasing for $x > x_0$ (for some $x_0 > 0$), and that
$$
\frac{{\frac{d}{{dx}}\int_x^\infty {\frac{{(\log t)^2 }}{{t^2 }}dt} }}{{\frac{d}{{dx}}\frac{{(\log x)^2 }}{x}}} = \frac{{ - \frac{{(\log x)^2 }}{{x^2 }}}}{{\frac{{2\log x - (\log x)^2 }}{{x^2 }}}} = \frac{{ - \log x}}{{2 - \log x}} \to 1 \;\; {\rm as} \;\; x \to \infty ,
$$
hence also
$$
\frac{{\int_x^\infty {\frac{{(\log t)^2 }}{{t^2 }}dt} }}{{\frac{{(\log x)^2 }}{x}}} \to 1 \;\; {\rm as} \;\; x \to \infty.
$$</p>
<p>EDIT (elaborating; what follows also completes mixedmath's answer):</p>
<p>With $f$ as above,
$$
f'(x) = \frac{{2\log x(1 - \log x)}}{{x^3 }},
$$
implying that $f$ is decreasing on $[e,\infty)$. It follows that for any $n \geq 3$,
$$
\int_n^\infty {f(x)\,dx} \leq \sum\limits_{k = n}^\infty {f(k)} \leq f(n) + \int_n^\infty {f(x)\,dx} .
$$
Hence
$$
\frac{{\int_n^\infty {f(x)\,dx} }}{{\frac{{(\log n)^2 }}{n}}} \le \frac{{\sum\nolimits_{k = n}^\infty {f(k)} }}{{\frac{{(\log n)^2 }}{n}}} \le \frac{{f(n)}}{{\frac{{(\log n)^2 }}{n}}} + \frac{{\int_n^\infty {f(x)\,dx} }}{{\frac{{(\log n)^2 }}{n}}}.
$$
So from
$$
\frac{{f(n)}}{{\frac{{(\log n)^2 }}{n}}} = \frac{1}{n} \to 0
$$
and
$$
\frac{{\int_n^\infty {\frac{{(\log x)^2 }}{{x^2 }}\,dx} }}{{\frac{{(\log n)^2 }}{n}}} \to 1
$$
as $n \to \infty$, it follows that
$$
\frac{{\sum\nolimits_{k = n}^\infty {f(k)} }}{{\frac{{(\log n)^2 }}{n}}} \to 1.
$$</p>
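<p>A numerical check of this estimate (integration by parts gives $\int_n^\infty \frac{(\log t)^2}{t^2}\,dt = \frac{(\log n)^2+2\log n+2}{n}$, so the ratio tends to $1$ only at speed $2/\log n$; the tail sum below is truncated at $10^6$, which is accurate enough for $n=1000$):</p>

```python
import math

n, N = 1000, 10 ** 6
tail = sum(math.log(k) ** 2 / k ** 2 for k in range(n, N + 1))
ratio = tail / (math.log(n) ** 2 / n)

# Integration by parts predicts ratio ≈ 1 + 2/log n + 2/(log n)^2 ≈ 1.33 at n = 1000.
predicted = 1 + 2 / math.log(n) + 2 / math.log(n) ** 2
assert abs(ratio - predicted) < 0.01
assert 1.0 < ratio < 1.5        # consistent with the O((log n)^2 / n) bound
```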
|
135,012 | <p>How to prove (or to disprove) that all the roots of the polynomial of degree $n$ $$\sum_{k=0}^{k=n}(2k+1)x^k$$ belong to the disk $\{z:|z|<1\}?$ Numerical calculations confirm that, but I don't see any approach to a proof of so simply formulated statement. It would be useful in connection with an irreducibility problem. </p>
| Emil Jeřábek | 12,705 | <p>Let $f$ denote your polynomial. Roots of the polynomial
$$g(x)=\sum_{k=0}^nx^{2k+1}=x\frac{x^{2(n+1)}-1}{x^2-1}$$
are $0$ and roots of unity. By the <a href="http://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas_theorem">Gauss–Lucas theorem</a>, the roots of $g'(x)=f(x^2)$ lie in their convex hull, and a fortiori in the disk $\{z:|z|\le1\}$. In order to get a strict inequality, it suffices to show that $g$ is square-free.</p>
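<p>For a concrete instance ($n=6$, the degree-$6$ polynomial $\sum_{k=0}^{6}(2k+1)x^k$), a simple root-finder confirms that all roots lie strictly inside the unit disk. This is only a numerical illustration of the theorem; Durand–Kerner is my choice for a dependency-free check:</p>

```python
coeffs = [(2 * k + 1) / 13 for k in range(7)]   # monic form of sum_{k<=6} (2k+1) x^k

def p(z):
    acc = 0j
    for c in reversed(coeffs):                  # Horner evaluation, highest power first
        acc = acc * z + c
    return acc

# Durand-Kerner simultaneous iteration from the usual generic starting points.
roots = [(0.4 + 0.9j) ** i for i in range(1, 7)]
for _ in range(200):
    updated = []
    for i, z in enumerate(roots):
        denom = 1 + 0j
        for j, w in enumerate(roots):
            if i != j:
                denom *= z - w
        updated.append(z - p(z) / denom)
    roots = updated

assert max(abs(p(z)) for z in roots) < 1e-8     # all six roots located
assert max(abs(z) for z in roots) < 1.0         # and all lie strictly inside the unit disk
```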
|
123,202 | <blockquote>
<p>Let $X,Y$ be vectors in $\mathbb{C}^n$, and assume that $X\ne0$. Prove
that there is a symmetric matrix $B$ such that $BX=Y$.</p>
</blockquote>
<p>This is an exercise from a chapter about bilinear forms. So the intended solution should be somehow related to it.</p>
<p>Pre-multiplying both sides by $Y^t$, we get $Y^tBX=Y^tY$. The left hand side is a bilinear form $\langle Y,X\rangle $ with $B$ as the matrix of the form with respect to the standard basis. Am I correct here?</p>
<p>If so, then it suffices to find a bilinear form $\langle\cdot,\cdot\rangle\colon\mathbb{C}^n\times\mathbb{C}^n\rightarrow\mathbb{C}$ such that $\langle Y,X\rangle=Y^tY$. If $Y=0$, any bilinear form will do, because $\langle0,X\rangle=0\langle 0,X\rangle =0$ by linearity in the first variable. If $Y\ne0$, it suffices to find a bilinear form such that $\langle Y,X\rangle$ is nonzero, then we can multiply by the appropriate factor. This should be very near to a complete solution, but I can't figure out the rest.</p>
<p><em><strong>Edit</strong></em>: Okay, my approach seems to be completely wrong. Using Phira's hint, I think I managed to make a complete proof.</p>
<p>Choose an orthonormal basis $(v_1,\ldots,v_n)$ such that $v_1=\frac{X}{\|X\|}$, which can be done by the Gram-Schmidt process. Let $P$ be the $n\times n$ matrix whose $i$-th column is the vector $v_i$. Then $P$ is orthogonal. Let $P^{-1}Y=(a_1,\ldots,a_n)^t$. Choose the $n\times n$ matrix $M$ whose first column and first row are both the vector $\frac1{\|X\|}(a_1,\ldots,a_n)$, with 0 everywhere else. Clearly $M$ is symmetric and it's easy to check that $(PMP^{-1})X=Y$. So the desired matrix is $B=PMP^{-1}$, which is symmetric because $P$ is orthogonal. $\Box$</p>
<p>However, this solution does not make use of bilinear forms. So there might be a simpler way.</p>
| Vedran Šego | 78,926 | <p>Here is a short, constructive proof:</p>
<ol>
<li><p>If $y = 0$, then $B = 0$.</p></li>
<li><p>If $y^Tx \ne 0$, then $B = \frac{1}{y^Tx} yy^T$.</p></li>
<li><p>If $y^Tx = 0$, then $B = \frac{\|y\|}{\|x\|} H$, where $H$ is a <a href="http://en.wikipedia.org/wiki/Householder_transformation" rel="nofollow noreferrer">Householder transformation</a> that maps $x/\|x\|$ to $y/\|y\|$ (it always exists, by <a href="https://math.stackexchange.com/a/335703/78926">this answer</a>).</p></li>
</ol>
<p>The last one can always be done (for $x \ne 0$), but I find the cases 1 and 2 more straightforward, so I decided to include them as well.</p>
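<p>The three cases translate directly into dependency-free code; a sketch (function and variable names are mine):</p>

```python
def symmetric_map(x, y, tol=1e-12):
    """Return a symmetric matrix B (as nested lists) with B x = y, for x != 0."""
    n = len(x)
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))

    if all(abs(t) < tol for t in y):                     # case 1: y = 0
        return [[0.0] * n for _ in range(n)]

    d = dot(y, x)
    if abs(d) > tol:                                     # case 2: B = y y^T / (y^T x)
        return [[y[i] * y[j] / d for j in range(n)] for i in range(n)]

    # case 3: y^T x = 0 -- scaled Householder reflection sending x/|x| to y/|y|.
    nx, ny = dot(x, x) ** 0.5, dot(y, y) ** 0.5
    v = [x[i] / nx - y[i] / ny for i in range(n)]
    vv = dot(v, v)
    H = [[(1.0 if i == j else 0.0) - 2 * v[i] * v[j] / vv for j in range(n)]
         for i in range(n)]
    return [[ny / nx * H[i][j] for j in range(n)] for i in range(n)]

def matvec(B, x):
    return [sum(B[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

for x, y in [([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]),        # y^T x != 0
             ([1.0, 0.0, 0.0], [0.0, 2.0, 0.0])]:       # y^T x  = 0
    B = symmetric_map(x, y)
    assert all(abs(B[i][j] - B[j][i]) < 1e-9 for i in range(3) for j in range(3))
    assert all(abs(a - b) < 1e-9 for a, b in zip(matvec(B, x), y))
```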
|
1,085,702 | <p>It's said that a computer program "prints" a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can "print":</p>
<ol>
<li>All the prime numbers.</li>
<li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li>
<li>Numbers including "<span class="math-container">$7$</span>" in them.</li>
</ol>
<p>Prove there is a set that no computer program can print.</p>
<p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p>
<p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful, but I don't really understand them, nor do I need to use them, for it's about sets.</p>
| Jihad | 191,049 | <p>There are countably many programs but the number of subsets of $\mathbb{N}$ is uncountable.</p>
|
1,085,702 | <p>It's said that a computer program "prints" a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can "print":</p>
<ol>
<li>All the prime numbers.</li>
<li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li>
<li>Numbers including "<span class="math-container">$7$</span>" in them.</li>
</ol>
<p>Prove there is a set that no computer program can print.</p>
<p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p>
<p>Guys, this was given to me by my Set Theory professor, meaning, this question does not regard computer but regards an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computer or number of programs do not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs including computing are not good, on the contrary, I suppose they are wonderful, but I don't really understand them nor do I need to use them, for it's about sets.</p>
| QuestionC | 187,599 | <p>There is a really elegant proof for this that requires virtually no math background at all.</p>
<p>All computer programs are a finite sequence of bytes, which is just a number in base 256. So each computer program can be represented as a unique natural number. <strong>This statement is elaborated in detail below the divide.</strong></p>
<ul>
<li>If a computer program prints its own number, then that program is <em>blue</em> and its number is <em>blue</em>.</li>
<li>If a program does not print its own number, then that program is <em>red</em> and its number is <em>red</em>.</li>
</ul>
<p>The set of <em>red</em> numbers is a subset of natural numbers. Now write a program that prints this set. Is that program red or blue?</p>
<ul>
<li>Suppose the program is <em>red</em>. Then it must print its own number as part of the set, but this causes it to be a <em>blue</em> program.</li>
<li>Suppose the program is <em>blue</em>. Then it must print its own number as a <em>blue</em> program, but this causes its output to not be the set of <em>red</em> numbers.</li>
</ul>
<p>This program is impossible! Therefore, there must exist at least one set which programs can not print.</p>
<p>This is how I learned the Cantor set/subset inequality. I couldn't find a better link, but I got it from a Martin Gardner book.</p>
<hr>
<p><em>Addendum</em></p>
<p>Let's get into the statement</p>
<blockquote>
<p>each computer program can be represented as a unique natural number</p>
</blockquote>
<p>This will involve some math, particularly working in binary. We are going to create a <a href="https://en.wikipedia.org/wiki/G%C3%B6del_numbering">Gödel numbering</a> for computer programs.</p>
<p>Assume every program can be represented as a finite string of 0's and 1's. This accurately describes real world programs and the input to a <a href="https://en.wikipedia.org/wiki/Universal_Turing_machine">Universal Turing Machine</a>.</p>
<p>So any given program X is x<sub>1</sub>x<sub>2</sub>...x<sub>N</sub> where x<sub>i</sub> is 0 or 1 for each i.</p>
<p>Let us define <code>Num(X)</code> as the binary number 1x<sub>1</sub>x<sub>2</sub>...x<sub>N</sub>. <code>Num(X)</code> of the program X = '0110' would be '10110' in binary, which is 22. </p>
<p><code>Num(X)</code> gives each program a unique natural number because...</p>
<ul>
<li>If two programs have differing length, then the longer program has a greater <code>Num()</code> than the shorter.</li>
<li>If two programs have identical length but differ on some bits, then the binary representation is different for that bit, so <code>Num()</code> will differ between the programs.</li>
<li>In any other case, the two programs have identical length and identical bits, meaning they are identical programs.</li>
</ul>
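<p>The <code>Num(X)</code> encoding in this addendum is a one-liner, and its injectivity can be spot-checked by brute force over all short bit strings:</p>

```python
from itertools import product

def num(program_bits: str) -> int:
    # Prefix a '1' so leading zeros of the program are not lost, then read as binary.
    return int("1" + program_bits, 2)

assert num("0110") == 22                     # the worked example: '10110' in binary is 22

# Injectivity over every program of length <= 8 (2 + 4 + ... + 256 = 510 programs).
programs = ["".join(bits) for L in range(1, 9) for bits in product("01", repeat=L)]
assert len(set(num(s) for s in programs)) == len(programs)
```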
<p>Now we can get on to the set theory part of the proof.</p>
<p>That proof assumed binary, but as long as your program is finitely describable in any language which uses a finite number of characters (like English!) you can similarly map it to a unique natural number. This is interesting because it extends the proof from programs to any describable concept (like genie wishes!).</p>
|
1,274,717 | <p>Say that $V$ is a finite dimensional vector space over a field and and $f : V \to V$ a linear map. There is an integer $i$ such that $\text{ker}(f^n) = \text{ker}(f^{n+1})$ for all $n \geq i$. You see that by noting that $\text{ker}(f^n) \subseteq \text{ker}(f^{n+1})$ for all $n$ and since $V$ is finite dimensional, they must stabilize at some point.</p>
<p>I am having trouble seeing that for $n \leq s$ where $s$ is the least integer $i$ above, that $\text{ker}(f^n) \subsetneq \text{ker}(f^{n+1})$. How can I see the proper containment?</p>
<p>I am pretty sure that the proof goes like this: If $n < s$ and $\text{ker}(f^{n-1}) = \text{ker}(f^{n})$, then $\text{ker}(f^j) = \text{ker}(f^{j+1})$ for all $j \geq n$ contradicting that $s$ is the least such integer. But how do I show this? Thanks for your help.</p>
| Pierrev | 235,744 | <p>You are right about your proof. How do we do it?</p>
<p>It only uses the rank–nullity theorem a few times.</p>
<p>I keep your notation:</p>
<p>If $ker(f^{n-1})=ker(f^n)$, then by the rank–nullity theorem $$dim(im(f^{n}))=dim(V)-dim(ker(f^n))=dim(V)-dim(ker(f^{n-1}))=dim(im(f^{n-1})),$$ and since $im(f^{n})$ is included in $im(f^{n-1})$, this shows that $im(f^{n-1})=im(f^{n})$.</p>
<p>Then knowing that $im(f^{n+1})$ is included in $im(f^{n})$, let $x$ be in $im(f^{n})$.
Then there exists $y\in V$ such that $x=f^n(y)=f(f^{n-1}(y))$ and $f^{n-1}(y)\in im(f^{n-1})=im(f^{n})$ so there exists $z\in V$ such that $f^{n-1}(y)=f^n(z)$. Then $x=f(f^n(z))\in im(f^{n+1})$. </p>
<p>Having both inclusions : $im(f^{n+1})=im(f^{n})$. Now using rank-nullity theorem : by inclusion and dimension equality $$\ker(f^{n+1})=\ker(f^{n}).$$</p>
<p>This is the base step of the induction, and in exactly the same way you can show the equality of the kernels at every subsequent stage, in particular up to and past $s$, contradicting the minimality of $s$.</p>
<p>Am I clear? I can detail more.</p>
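<p>(A concrete sanity check, my own addition rather than part of the proof.) For a nilpotent shift map on a $4$-dimensional space, the kernel dimensions of the powers increase strictly until they stabilize, exactly as the induction predicts. A small Python check over the rationals:</p>

```python
from fractions import Fraction

def rank(M):
    # Rank via Gaussian elimination over the rationals (exact arithmetic)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matpow(A, n):
    size = len(A)
    R = [[int(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        R = [[sum(R[i][k] * A[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return R

# Nilpotent shift matrix on a 4-dimensional space: f(e_1) = 0, f(e_{i+1}) = e_i
N = [[int(j == i + 1) for j in range(4)] for i in range(4)]

# dim ker(f^n) = 4 - rank(f^n): strictly increasing, then stable from n = 4 on
dims = [4 - rank(matpow(N, n)) for n in range(6)]
print(dims)  # [0, 1, 2, 3, 4, 4]
```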
|
4,438,512 | <p>Recently I learned about dividing and forking of formula / partial types:</p>
<p>We say that <span class="math-container">$φ(x,b)$</span> divides over <span class="math-container">$C$</span> (where <span class="math-container">$b$</span> in the monster and <span class="math-container">$C$</span> is small) if there exists an indiscernible sequence <span class="math-container">$\{b_i\}_{i\in\omega}$</span> over <span class="math-container">$C$</span> of elements with the type <span class="math-container">$\operatorname{tp}(b/C)$</span> such that <span class="math-container">$\{φ(x,b_i)\}$</span> is inconsistent.</p>
<p>We say that <span class="math-container">$φ(x,b)$</span> fork over <span class="math-container">$C$</span> if it implies finite disjunction of dividing formulas over <span class="math-container">$C$</span>.</p>
<hr />
<p>I am trying to get a better intuition about those definitions, especially about forking.</p>
<p>From my understanding, <span class="math-container">$φ(x,b)$</span> divides over <span class="math-container">$C$</span> if <span class="math-container">$φ(C,b)$</span> (i.e. the set <span class="math-container">$φ$</span> defines with <span class="math-container">$b$</span>) is not "big", in the sense that for some fixed <span class="math-container">$k$</span>, every <span class="math-container">$A\subseteq ω$</span> with <span class="math-container">$|A|=k$</span> satisfies <span class="math-container">$\bigcap\limits_{i∈A} φ(C,b_i)=∅$</span>.</p>
<p>This view is reinforced when we see that the forking formulas are basically the ideal generated from the dividing formulas (I mainly worked in set theory with filters which are "defining big sets", so my intuition about their dual, the ideals, is "defining the small sets").</p>
<p>But this view doesn't feel that good to me; it is somewhat lacking. Specifically, what is the role of the second parameter (<span class="math-container">$b,b_i$</span>) here, and why do we look at what happens to the defined subset when we change it? And why are we fixing <span class="math-container">$C$</span>? Isn't the first parameter usually the one we move around?</p>
<p>Furthermore, I feel like this view fails completely for forking: the set of forking formulas is not always a proper ideal, even over the empty set (e.g. in <span class="math-container">$(X,\mathcal P(X),∈)$</span> or in the circular order, in which <span class="math-container">$x=x$</span> forks over <span class="math-container">$∅$</span>).</p>
<p>My understanding of forking/dividing for partial types is basically the same as for formulas.</p>
<hr />
<p>So I am guessing that my question is: is there a standard way to think about dividing and forking formulas and partial types?</p>
<p>Also, I focused a lot on the study of set theory/formal logic, and kind of neglected abstract algebra for some time (which comes back to bite me now...), so if there is a view that appeals to set theorists and relies less on algebraic intuition, I would love to hear about it.</p>
| Alex Kruckman | 7,062 | <p>Some years ago I wrote an answer about the motivation for the definitions of dividing and forking. You may find it helpful: <a href="https://math.stackexchange.com/a/1613033/7062">https://math.stackexchange.com/a/1613033/7062</a></p>
<p>But I'll add to what I wrote there, addressing some specific points in your question.</p>
<p>You wrote:</p>
<blockquote>
<p>From my understanding, <span class="math-container">$\varphi(x,b)$</span> divides over <span class="math-container">$C$</span> if <span class="math-container">$\varphi(C,b)$</span> (i.e. the set <span class="math-container">$\varphi$</span> defines with <span class="math-container">$b$</span>) is not "big".</p>
</blockquote>
<p>That's exactly the right idea, except that we shouldn't think about the set <span class="math-container">$\varphi(C,b)$</span>, but rather the set <span class="math-container">$\varphi(\mathscr{U},b)$</span>, where <span class="math-container">$\mathscr{U}\models T$</span> is the monster model. Often <span class="math-container">$\varphi(C,b)$</span> might be empty (as small as possible!) even though <span class="math-container">$\varphi(x,b)$</span> does not divide.</p>
<p>In fact, to start getting intuition for these notions, it's probably best to think about the case <span class="math-container">$C = \varnothing$</span>.</p>
<p>If I give you a definable set in <span class="math-container">$\mathscr{U}$</span>, say defined by the formula <span class="math-container">$\varphi(x,b)$</span>, I can also find "copies" of this definable set that "look the same". What makes a copy "look the same"? Well, it should be defined by the same formula, using parameters that "look the same": <span class="math-container">$\varphi(x,b')$</span>, where <span class="math-container">$\mathrm{tp}(b) = \mathrm{tp}(b')$</span>. By the strong homogeneity of the monster model, <span class="math-container">$\mathrm{tp}(b) = \mathrm{tp}(b')$</span> if and only if there is an automorphism <span class="math-container">$\sigma\in \mathrm{Aut}(\mathscr{U})$</span> such that <span class="math-container">$\sigma(b) = b'$</span>, and in that case <span class="math-container">$\sigma(\varphi(\mathscr{U},b)) = \varphi(\mathscr{U},b')$</span>.</p>
<p>Intuitively, we say a formula divides (over <span class="math-container">$\varnothing$</span>), if it has a lot of copies with very little overlap. In the answer linked above, I tried to give some intuition for why this is a good notion of "smallness".</p>
<p>Now it's often useful to draw finer distinctions for when two tuples "look the same" than looking at their type over <span class="math-container">$\varnothing$</span>. We could instead fix a set <span class="math-container">$C$</span> in the background and insist that a "copy" of <span class="math-container">$\varphi(\mathcal{U},b)$</span> should be defined by <span class="math-container">$\varphi(x,b')$</span>, where <span class="math-container">$\mathrm{tp}(b/C) = \mathrm{tp}(b'/C)$</span>. Equivalently, these definable sets are conjugate by an automorphism of <span class="math-container">$\mathscr{U}$</span> which fixes <span class="math-container">$C$</span> pointwise. This is the role of "over <span class="math-container">$C$</span>" in the definition of "dividing over <span class="math-container">$C$</span>".</p>
<p>Whenever you think about dividing or forking over <span class="math-container">$C$</span>, you could just as well think about dividing or forking over <span class="math-container">$\varnothing$</span>, in the expanded language where we name every element of <span class="math-container">$C$</span> by a constant.</p>
<blockquote>
<p>This view is reinforced when we see that the forking formulas are basically the ideal generated from the dividing formulas (I mainly worked in set theory with filters which are "defining big sets", so my intuition about their dual, the ideals, is "defining the small sets").</p>
</blockquote>
<p>That's exactly right: once we have a basic notion of "smallness" (dividing), it's natural to want the small sets to form an ideal in the Boolean algebra of definable sets. The forking formulas are exactly the ones which are "small" in the sense that they are in the ideal generated by the dividing formulas.</p>
<p>From a technical viewpoint, the fact that the forking formulas form an ideal lets us take any non-forking partial type (that is, any filter in the Boolean algebra of definable sets which does not meet the forking ideal) and extend it to a non-forking complete type (an ultrafilter which does not meet the forking ideal). In practice, when <span class="math-container">$C\subseteq B\subseteq A$</span>, this is used to take a type <span class="math-container">$p(x)\in S_x(B)$</span> which does not fork over <span class="math-container">$C$</span> and extend it "generically" to a type in <span class="math-container">$S_x(A)$</span> which does not fork over <span class="math-container">$C$</span>. In other words if <span class="math-container">$p(x)$</span> doesn't fork over <span class="math-container">$C$</span>, we can extend it to any larger set of parameters without being "forced" into any definable sets which are "small" from the point of view of <span class="math-container">$C$</span>.</p>
<blockquote>
<p>I feel like this view falls completely for forks: the set of forking formulas is not always a proper ideal.</p>
</blockquote>
<p>Well, it's a common theme in mathematics that attempts to distinguish between large and small sets don't work in all situations. For example, from the point of view of Baire category, we introduce a basic notion of "smallness", namely nowhere denseness, and then we want the "small" sets to form a <span class="math-container">$\sigma$</span>-ideal, so we define meager sets to be countable unions of nowhere dense sets. This works great for some spaces (those with the property of Baire), but in other spaces we find that the entire space is meager, so this particular large/small distinction is useless.</p>
<p>It's quite an amazing phenomenon in model theory that the analogous situation is so rare: there are only a few natural examples where the formula <span class="math-container">$x = x$</span> forks, so that the forking ideal is the entire Boolean algebra of definable sets. If we restrict attention to simple theories, we can prove that this never happens, which is one reason why the theory of forking and dividing is so well-suited to the context of simple theories.</p>
|
2,135,228 | <p>Find</p>
<p>(a) $P\{A \cup B\}$</p>
<p>(b) $P\{A^c\}$</p>
<p>(c) $P\{A^c \cap B\}$</p>
<p>This is what I have right now:</p>
<p>(a) $P\{A \cup B\}=0.4+0.5=0.90$</p>
<p>(b) $P\{A^c\}= 1-0.4=0.60$</p>
<p>(c) $P\{A^c \cap B\}= (0.6)\cdot(0.5)=0.30$</p>
<p>Am I doing it correctly?</p>
| victoria | 412,473 | <p>You can't see how this property can be satisfied because with this definition, no, it can't be satisfied. You have just proved that this thing fails to be a vector space. That is how you do it.</p>
|
2,441,359 | <p>It's an example given in my book (without explanation) after the monotone convergence theorem and the dominated convergence theorem:</p>
<p>Find an equivalent of $$\int_0^{\pi/2}\dfrac {dx} {\sqrt{\sin^2(x)+\epsilon \cos^2(x)}}$$</p>
<p>when $\epsilon\to 0^{+}$.</p>
<p>Inspired by the theorems, I naturally think of a sequence $(\epsilon_n)$ that converges to $0$, and it is monotone. However, the pointwise limit $1/\sin(x)$ does not have a convergent integral (in the other examples the integrals converge to a finite number...). Would someone give a hint about how to deal with the divergent case?</p>
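<p>(A numerical exploration I added while thinking about this; it is not part of the book's example.) Writing $\sin^2 x+\epsilon\cos^2 x = 1-(1-\epsilon)\sin^2 u$ with $u=\pi/2-x$ shows the integral equals the complete elliptic integral $K(\sqrt{1-\epsilon})$, which is computable via the arithmetic–geometric mean; numerically the ratio against $-\tfrac12\ln\epsilon$ tends to $1$, suggesting the equivalent $-\tfrac12\ln\epsilon$ (more precisely $\ln(4/\sqrt\epsilon)$):</p>

```python
import math

def agm(a, b):
    # Arithmetic-geometric mean; quadratically convergent
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# I(eps) = K(sqrt(1 - eps)) = pi / (2 * AGM(1, sqrt(eps)))
def I(eps):
    return math.pi / (2 * agm(1.0, math.sqrt(eps)))

ratios = [I(e) / (-0.5 * math.log(e)) for e in (1e-4, 1e-8, 1e-12)]
print(ratios)  # decreasing toward 1
```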
| user21820 | 21,820 | <p>The answer is "no" even in intuitionistic propositional logic, which is what I assume you mean by "constructive logic".
$
\def\imp{\Rightarrow}
\def\eq{\Leftrightarrow}
\def\from{\leftarrow}
$</p>
<p>Let $P \overset{def}\equiv ( A \lor \neg A ) \eq ( B \lor \neg B )$. Then $P$ is a classical tautology. But let $K$ be a <a href="https://math.stackexchange.com/a/1804379/21820">Kripke frame</a> with worlds $0 \to 1$ and $2 \to 3$ where $A$ holds at only $1$ while $B$ holds at only $3$. Then the implication $( A \lor \neg A ) \imp ( B \lor \neg B )$ is false in $K$ because at $2$ we have that $\neg A$ (which denotes "$A \to \bot$") holds but neither $B$ nor $\neg B$ hold. By symmetry, $( B \lor \neg B ) \imp ( A \lor \neg A )$ is false in $K$ too. Thus for atoms $A,B$ neither of the implications can be proven.</p>
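<p>This countermodel is small enough to check mechanically. Here is a sketch of a Kripke forcing checker in Python (my own addition; formulas are encoded as tuples):</p>

```python
# Kripke frame from the answer: worlds 0 -> 1 and 2 -> 3 (succ lists the
# full up-set of each world, including the world itself).
succ = {0: (0, 1), 1: (1,), 2: (2, 3), 3: (3,)}
val = {"A": {1}, "B": {3}}  # A holds only at world 1, B only at world 3

def forces(w, f):
    op = f[0]
    if op == "atom":
        return w in val[f[1]]
    if op == "bot":
        return False
    if op == "or":
        return forces(w, f[1]) or forces(w, f[2])
    if op == "imp":  # w forces X -> Y iff every later world forcing X forces Y
        return all(not forces(v, f[1]) or forces(v, f[2]) for v in succ[w])

A, B, bot = ("atom", "A"), ("atom", "B"), ("bot",)
def lem(p):  # p or not-p, with not-p encoded as p -> bot
    return ("or", p, ("imp", p, bot))

print(forces(2, lem(A)), forces(2, lem(B)))  # True False
print(forces(2, ("imp", lem(A), lem(B))))    # False: fails at world 2
```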
|
226,346 | <p>I have the three dimensional Laplacian <span class="math-container">$\nabla^2 T(x,y,z)=0$</span> representing temperature distribution in a cuboid shaped wall which is exposed to two fluids flowing perpendicular to each other on either of the <span class="math-container">$z$</span> faces i.e. at <span class="math-container">$z=0$</span> (ABCD) and <span class="math-container">$z=w$</span> (EFGH). Rest all the faces are insulated i.e. <span class="math-container">$x=0,L$</span> and <span class="math-container">$y=0,l$</span>. The following figure depicts the situation.<a href="https://i.stack.imgur.com/T4kKK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T4kKK.png" alt="enter image description here" /></a></p>
<p>The boundary conditions on the lateral faces are therefore:</p>
<p><span class="math-container">$$-k\frac{\partial T(0,y,z)}{\partial x}=-k\frac{\partial T(L,y,z)}{\partial x}=-k\frac{\partial T(x,0,z)}{\partial y}=-k\frac{\partial T(x,l,z)}{\partial y}=0 \tag 1$$</span></p>
<p>The bc(s) on the two z-faces are robin type and of the following form:</p>
<p><span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z} = p_c\bigg(T(x,y,0)-e^{-b_c y/l}\left[t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c s/l}T(x,s,0)ds\right]\bigg) \tag 2$$</span></p>
<p><span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z} = p_h\bigg(e^{-b_h x/L}\left[t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\right]-T(x,y,w)\bigg) \tag 3$$</span></p>
<p><span class="math-container">$t_{hi}, t_{ci}, b_h, b_c, p_h, p_c, k$</span> are all constants <span class="math-container">$>0$</span>.</p>
<p>I have two questions:</p>
<p><strong>(1)</strong> With the insulated conditions mentioned in <span class="math-container">$(1)$</span> does a solution exist for this system?</p>
<p><strong>(2)</strong> Can someone help in solving this analytically? I tried to solve it using the following approach (separation of variables) but obtained the result I describe below (in short, a <em>trivial solution</em>):</p>
<p>I will include the code for reference:</p>
<pre><code>T[x_, y_, z_] = (C1*E^(γ z) + C2 E^(-γ z))*
Cos[n π x/L]*Cos[m π y/l] (*Preliminary T based on homogeneous Neumann x,y faces *)
tc[x_, y_] =
E^(-bc*y/l)*(tci + (bc/l)*
Integrate[E^(bc*s/l)*T[x, s, 0], {s, 0, y}]);
bc1 = (D[T[x, y, z], z] /. z -> 0) == pc (T[x, y, 0] - tc[x, y]);
ortheq1 =
Integrate[(bc1[[1]] - bc1[[2]])*Cos[n π x/L]*
Cos[m π y/l], {x, 0, L}, {y, 0, l},
Assumptions -> {L > 0, l > 0, bc > 0, pc > 0, tci > 0,
n ∈ Integers && n > 0,
m ∈ Integers && m > 0}] == 0 // Simplify
th[x_, y_] =
E^(-bh*x/L)*(thi + (bh/L)*
Integrate[E^(bh*s/L)*T[s, y, w], {s, 0, x}]);
bc2 = (D[T[x, y, z], z] /. z -> w) == ph (th[x, y] - T[x, y, w]);
ortheq2 =
Integrate[(bc2[[1]] - bc2[[2]])*Cos[n π x/L]*
Cos[m π y/l], {x, 0, L}, {y, 0, l},
Assumptions -> {L > 0, l > 0, bc > 0, pc > 0, tci > 0,
n ∈ Integers && n > 0,
m ∈ Integers && m > 0}] == 0 // Simplify
soln = Solve[{ortheq1, ortheq2}, {C1, C2}];
CC1 = C1 /. soln[[1, 1]];
CC2 = C2 /. soln[[1, 2]];
expression1 := CC1;
c1[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] :=
Evaluate[expression1];
expression2 := CC2;
c2[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] :=
Evaluate[expression2];
γ1[n_, m_] := Sqrt[(n π/L)^2 + (m π/l)^2];
</code></pre>
<p>I have used <code>Cos[n π x/L]*Cos[m π y/l]</code> considering the homogeneous Neumann condition on the lateral faces i.e. <span class="math-container">$x$</span> and <span class="math-container">$y$</span> faces.</p>
<p>Declaring some constants and then carrying out the summation:</p>
<pre><code>m0 = 30; n0 = 30;
L = 0.025; l = 0.025; w = 0.003; bh = 0.433; bc = 0.433; ph = 65.24; \
pc = 65.24;
thi = 120; tci = 30;
Vn = Sum[(c1[n, m, L, l, bc, pc, tci, bh, ph, thi, w]*
E^(γ1[n, m]*z) +
c2[n, m, L, l, bc, pc, tci, bh, ph, thi, w]*
E^(-γ1[n, m]*z))*Cos[n π x/L]*Cos[m π y/l], {n,
1, n0}, {m, 1, m0}];
</code></pre>
<p>On executing an plotting at <code>z=0</code> using <code>Plot3D[Vn /. z -> 0, {x, 0, L}, {y, 0, l}]</code> I get the following:</p>
<p><a href="https://i.stack.imgur.com/HxIiR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HxIiR.jpg" alt="enter image description here" /></a></p>
<p>which is basically 0. On looking further I found that the constants <code>c1, c2</code> evaluate to <code>0</code> for any value of <code>n,m</code>.</p>
<p><strong>More specifically, I would like to know if some limiting solution could be developed to circumvent the problem of the constants evaluating to zero.</strong></p>
<hr />
<p><strong>Origins of the b.c.</strong><span class="math-container">$2,3$</span></p>
<p>Actual bc(s): <span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z}=p_c (T(x,y,0)-t_c) \tag 4$$</span>
<span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z}=p_h (t_h-T(x,y,w))\tag 5$$</span></p>
<p>where <span class="math-container">$t_h,t_c$</span> are defined in the equation:</p>
<p><span class="math-container">$$\frac{\partial t_c}{\partial y}+\frac{b_c}{l}(t_c-T(x,y,0))=0 \tag 6$$</span>
<span class="math-container">$$\frac{\partial t_h}{\partial x}+\frac{b_h}{L}(t_h-T(x,y,0))=0 \tag 7$$</span></p>
<p><span class="math-container">$$t_h=e^{-b_h x/L}\bigg(t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\bigg) \tag 8$$</span></p>
<p><span class="math-container">$$t_c=e^{-b_c y/l}\bigg(t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c s/l}T(x,s,0)ds\bigg) \tag 9$$</span></p>
<p>It is known that <span class="math-container">$t_h(x=0)=t_{hi}$</span> and <span class="math-container">$t_c(y=0)=t_{ci}$</span>. I had solved <span class="math-container">$6,7$</span> using the method of integrating factors and used the given conditions to reach <span class="math-container">$8,9$</span> which were then substituted into the original b.c.(s) <span class="math-container">$4,5$</span> to reach <span class="math-container">$2,3$</span>.</p>
<hr />
<p><strong>Non-dimensional formulation</strong></p>
<p>The non-dimensional version of the problem can be written as:</p>
<p>(In this section <span class="math-container">$x,y,z$</span> are non-dimensional; <span class="math-container">$x=x'/L,y=y'/l,z=z'/w, \theta=\frac{t-t_{ci}}{t_{hi}-t_{ci}}$</span>)</p>
<p>Also, <span class="math-container">$\beta_h=h_h (lL)/C_h, \beta_c=h_c (lL)/C_c$</span> (However, this information might not be needed)</p>
<p><span class="math-container">$$\lambda_h \frac{\partial^2 \theta_w}{\partial x^2}+\lambda_c \frac{\partial^2 \theta_w}{\partial y^2}+\lambda_z \frac{\partial^2 \theta_w}{\partial z^2}=0 \tag A$$</span></p>
<p>In <span class="math-container">$(A)$</span> <span class="math-container">$\lambda_h=1/L^2, \lambda_c=1/l^2, \lambda_z=1/w^2$</span></p>
<p><span class="math-container">$$\frac{\partial \theta_h}{\partial x}+\beta_h (\theta_h-\theta_w) = 0 \tag B$$</span></p>
<p><span class="math-container">$$\frac{\partial \theta_c}{\partial y} + \beta_c (\theta_c-\theta_w) = 0 \tag C$$</span></p>
<p>The z-boundary condition then becomes:</p>
<p><span class="math-container">$$\frac{\partial \theta_w(x,y,0)}{\partial z}=r_c (\theta_w(x,y,0)-\theta_c) \tag D$$</span>
<span class="math-container">$$\frac{\partial \theta_w(x,y,w)}{\partial z}=r_h (\theta_h-\theta_w(x,y,w))\tag E$$</span></p>
<p><span class="math-container">$$\theta_h(0,y)=1, \theta_c(x,0)=0$$</span></p>
<p>Here <span class="math-container">$r_h,r_c$</span> are non-dimensional quantities (<span class="math-container">$r_c=\frac{h_c w}{k}, r_h=\frac{h_h w}{k}$</span>).</p>
| bbgodfrey | 1,063 | <p><strong>Problem Statement</strong></p>
<p>For notational simplicity, use the non-dimensional formulation described near the end of the question. (Doing so also facilitates comparison with results of a <a href="https://mathematica.stackexchange.com/a/235979/1063">2D approximation</a> solved earlier.) The PDE is given by</p>
<pre><code>λh D[θw[x, y, z], x, x] + λc D[θw[x, y, z], y, y] + λz D[θw[x, y, z], z, z] == 0
</code></pre>
<p>over the domain <code>{x, 0, 1}, {y, 0, 1}, {z, 0, 1}</code>. Normal derivatives vanish on the <code>x</code> and <code>y</code> boundaries. Conditions on the <code>z</code> boundaries are given by</p>
<pre><code>(D[θw[x, y, z], z] + rh (θw[x, y, z] - θwh[x, y]) == 0) /. z -> 1
(D[θw[x, y, z], z] - rc (θw[x, y, z] - θwc[x, y]) == 0) /. z -> 0
</code></pre>
<p>with <code>θwc</code> and <code>θwh</code> specified by</p>
<pre><code>D[θwh[x, y], x] + bh (θw[x, y, 1] - θwh[x, y]) == 0
θwh[0, y] == 1
D[θwc[x, y], y] + bc (θwc[x, y] - θw[x, y, 0]) == 0
θwc[x, 0] == 0
</code></pre>
<p>Although the solution of the PDE itself can be expressed as a sum of trigonometric functions, the <code>z</code> boundary conditions couple what otherwise would be separable eigenfunctions.</p>
<p><strong>Coupling Coefficients</strong></p>
<p>The coupling coefficients in question are given by</p>
<pre><code>DSolveValue[{D[θc[y], y] + b (θc[y] - 1) == 0, θc[0] == 0}, θc[y], y] // Simplify
a00 = Simplify[Integrate[% , {y, 0, 1}]]
an0 = Simplify[Integrate[%% 2 Cos[n π y], {y, 0, 1}], Assumptions -> n ∈ Integers]
(* 1 - E^(-b y) *)
(* (-1 + b + E^-b)/b *)
(* -((2 b E^-b ((-1)^(1 + n) + E^b))/(b^2 + n^2 π^2)) *)
DSolveValue[{D[θc[y], y] + b (θc[y] - Cos[m Pi y]) == 0, θc[0] == 0}, θc[y], y] // Simplify
a0m = Simplify[Integrate[%, {y, 0, 1}], Assumptions -> m ∈ Integers]
amm = Simplify[Integrate[%% 2 Cos[m π y], {y, 0, 1}], Assumptions -> m ∈ Integers]
anm = FullSimplify[Integrate[%%% 2 Cos[n π y], {y, 0, 1}], Assumptions -> (m | n) ∈ Integers]
(* (b (-b E^(-b y) + b Cos[m π y] + m π Sin[m π y]))/(b^2 + m^2 π^2) *)
(* (b ((-1)^(1 + m) + E^-b))/(b^2 + m^2 π^2) *)
(* (b^2 E^-b (b^2 E^b + 2 b ((-1)^m - E^b) + E^b m^2 π^2))/(b^2 + m^2 π^2)^2 *)
(* (E^-b (2 (-1)^n b^3 (m - n) (m + n) + 2 b E^b (n^2 (b^2 + m^2 π^2) + (-1)^(1 + m + n)
m^2 (b^2 + n^2 π^2))))/((m - n) (m + n) (b^2 + m^2 π^2) (b^2 + n^2 π^2)) *)
a[nn_?IntegerQ, mm_?IntegerQ] := Which[nn == 0 && mm == 0, a00, mm == 0, an0, nn == 0, a0m,
nn == mm, amm, True, anm] /. {n -> nn, m -> mm}
</code></pre>
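<p>As an independent sanity check (my addition, not part of the derivation), the closed form for <code>a00</code> can be verified against direct numerical integration of $\theta_c(y)=1-e^{-by}$ in a few lines of Python:</p>

```python
import math

def a00_closed(b):
    # Closed form from above: (b - 1 + e^(-b)) / b
    return (b - 1.0 + math.exp(-b)) / b

def a00_numeric(b, n=20000):
    # Trapezoidal integration of 1 - e^(-b y) over [0, 1]
    h = 1.0 / n
    f = lambda y: 1.0 - math.exp(-b * y)
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(1.0))

for b in (0.5, 1.0, 2.0):
    print(b, a00_closed(b), a00_numeric(b))
```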
<p><strong>General Solution</strong></p>
<p>Express the solution as a sum of eigenfunctions in the absence of the <code>z</code> boundary conditions.</p>
<pre><code>λh D[θw[x, y, z], x, x] + λc D[θw[x, y, z], y, y] + λz D[θw[x, y, z], z, z];
Simplify[(% /. θw -> Function[{x, y, z}, Cos[nh Pi x] Cos[nc Pi y] θwz[z]])/
(Cos[nh Pi x] Cos[nc Pi y])] /. π^2 (nc^2 λc + nh^2 λh) -> k[nh, nc]^2 λz
Flatten@DSolveValue[% == 0, θwz[z], z] /. {C[1] -> c1[nh, nc], C[2] -> c2[nh, nc]}
(* -λz k[nh, nc]^2 θwz[z] + λz (θwz''[z] *)
(* E^(z k[nh, nc]) c1[nh, nc] + E^(-z k[nh, nc]) c2[nh, nc] *)
</code></pre>
<p>as expected. Note that <code>k[nh, nc] = Sqrt[π^2 (nc^2 λc + nh^2 λh)/λz]</code> has been introduced for notational simplicity. Although the result above for <code>θwz</code> is correct, rearranging the terms a bit is helpful in what follows.</p>
<pre><code>sz = c2[nh, nc] Sinh[k[nh, nc] z]/Cosh[k[nh, nc]] +
c1[nh, nc] Sinh[k[nh, nc] (1 - z)]/Sinh[k[nh, nc]];
</code></pre>
<p>because <code>sz /. z -> 0</code>, needed in the <code>z = 0</code> boundary condition, reduces to <code>c1[nh, nc]</code>. Next, use that boundary condition to eliminate <code>c2[nh, nc]</code>.</p>
<pre><code>sθc[nh_?IntegerQ, nc_?IntegerQ] := Sum[a[nc, m] c1[nh, m], {m, 0, maxc}] /. b -> bc;
(D[sz, z] == rc (sz - sθc[nh, nc])) /. z -> 0;
Solve[%, c2[nh, nc]] // Flatten // Apart;
sz1 = Simplify[sz /. %] // Apart
(* (c1[nh, nc] (Cosh[z k[nh, nc]] k[nh, nc] + rc Sinh[z k[nh, nc]]))/k[nh, nc]
- (rc Sinh[z k[nh, nc]] sθc[nh, nc])/k[nh, nc] *)
</code></pre>
<p>Finally, use the <code>z = 1</code> boundary condition to produce a matrix equation for <code>c[nh, nc]</code>.</p>
<pre><code>szθh[nh_?IntegerQ, nc_?IntegerQ] := Evaluate[sz1 /. z -> 1]
sθh[nh_?IntegerQ, nc_?IntegerQ] := Evaluate[Sum[a[nh, m] szθh[m, nc], {m, 0, maxh}]]
eq = Simplify[(D[sz1, z] + rh (sz1 - sθh[nh, nc])) /. z -> 1] -
rh (DiscreteDelta[nh] - a[nh, 0]) DiscreteDelta[nc]
(* -rh DiscreteDelta[nc] (-a[nh, 0] + DiscreteDelta[nh]) + (1/k[nh, nc])
(c1[nh, nc] ((rc + rh) Cosh[k[nh, nc]] k[nh, nc] + rc rh Sinh[k[nh, nc]] +
k[nh, nc]^2 Sinh[k[nh, nc]]) - rc rh Sinh[k[nh, nc]] sθc[nh, nc] -
k[nh, nc] (rc Cosh[k[nh, nc]] sθc[nh, nc] + rh sθh[nh, nc])) *)
</code></pre>
<p>The source term <code>rh (DiscreteDelta[nh] - a[nh, 0]) DiscreteDelta[nc]</code> arises from <code>θwh[0, y] == 1</code> instead of equaling zero.</p>
<p>Specific solution for Parameters Set to Unity.</p>
<pre><code>maxh = 3; maxc = 3; λh = 1; λc = 1; λz = 1; bh = 1; bc = 1; rh = 1; rc = 1;
ks = Flatten@Table[k[nh, nc] -> Sqrt[π^2 (nc^2 λc + nh^2 λh)/λz],
{nh, 0, maxh}, {nc, 0, maxc}]
eql = N[Collect[Flatten@Table[eq /. Sinh[k[0, 0]] -> k[0, 0], {nh, 0, maxh}, {nc, 0, maxc}]
/. b -> bh, _c1, Simplify] /. ks] /. c1[z1_, z2_] :> Rationalize[c1[z1, z2]];
(* {k[0, 0] -> 0, k[0, 1] -> π, k[0, 2] -> 2 π, k[0, 3] -> 3 π, k[1, 0] -> π,
k[1, 1] -> Sqrt[2] π, k[1, 2] -> Sqrt[5] π, k[1, 3] -> Sqrt[10] π, k[2, 0] -> 2 π,
k[2, 1] -> Sqrt[5] π, k[2, 2] -> 2 Sqrt[2] π, k[2, 3] -> Sqrt[13] π,
k[3, 0] -> 3 π, k[3, 1] -> Sqrt[10] π, k[3, 2] -> Sqrt[13] π, k[3, 3] -> 3 Sqrt[2] π} *)
</code></pre>
<p><code>eql</code> the numericized version of <code>eq</code> is too long to reproduce here. And, trying to solve <code>eq</code> itself is far too slow. Next, compute the <code>c1</code> and from them the solution.</p>
<pre><code>Union@Cases[eql, _c1, Infinity];
coef = NSolve[Thread[eql == 0], %] // Flatten
sol = Total@Simplify[Flatten@Table[sz1 Cos[nh Pi x] Cos[nc Pi y] /.
Sinh[z k[0, 0]] -> z k[0, 0], {nh, 0, maxh}, {nc, 0, maxc}], Trig -> False] /. ks /. %;
(* {c1[0, 0] -> 0.3788, c1[0, 1] -> -0.0234913, c1[0, 2] -> -0.00123552,
c1[0, 3] -> -0.00109202, c1[1, 0] -> 0.00168554, c1[1, 1] -> -0.0000775391,
c1[1, 2] -> -5.40917*10^-6, c1[1, 3] -> -4.63996*10^-6, c1[2, 0] -> 4.19045*10^-6,
c1[2, 1] -> -1.24251*10^-7, c1[2, 2] -> -1.17696*10^-8, c1[2, 3] -> -1.02576*10^-8,
c1[3, 0] -> 1.65131*10^-7, c1[3, 1] -> -3.41814*10^-9, c1[3, 2] -> 3.86348*10^-10,
c1[3, 3] -> -3.48432*10^-10} *)
</code></pre>
<p>Here are several plots of the solution, beginning with a 3D contour plot.</p>
<pre><code>ContourPlot3D[sol, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}, Contours -> {.4, .5, .6},
ContourStyle -> Opacity[0.75], PlotLegends -> Placed[Automatic, {.9, .9}],
ImageSize -> Large, AxesLabel -> {x, y, z}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/yxZiL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yxZiL.png" alt="enter image description here" /></a></p>
<p>Next are slices through the solution at the ends and mid-point in <code>z</code>. The second, at <code>z = 1/2</code>, is similar to the seventh plot in the <a href="https://mathematica.stackexchange.com/a/235979/1063">2D thin slab approximation</a>, even though the calculation here is for a cube.</p>
<pre><code>Plot3D[sol /. z -> 0, {x, 0, 1}, {y, 0, 1}, ImageSize -> Large,
AxesLabel -> {x, y, "θw(z=0)"}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/fO0Qe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fO0Qe.png" alt="enter image description here" /></a></p>
<pre><code> Plot3D[sol /. z -> 1/2, {x, 0, 1}, {y, 0, 1}, ImageSize -> Large,
AxesLabel -> {x, y, "θw(z=1/2)"}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/zsisG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zsisG.png" alt="enter image description here" /></a></p>
<pre><code> Plot3D[sol /. z -> 1, {x, 0, 1}, {y, 0, 1}, ImageSize -> Large,
AxesLabel -> {x, y, "θw(z=1)"}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/dAnpW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dAnpW.png" alt="enter image description here" /></a></p>
<p>Finally, here are <code>θwc</code> and <code>θwh</code>, each computed in two distinct ways, by the expansion given above and by direct integration using the expansion of <code>θw</code>. (They differ in that the latter does not employ the <code>a</code> matrix.) The two methods agree very well except at the edges in <code>y</code> and <code>x</code>, respectively, where convergence of the cosine series is nonuniform. Increasing the number of modes reduces this modest disagreement but does not change <code>sol</code> by more than <code>10^-4</code>.</p>
<pre><code>Simplify[(sol - D[sol, z]/rc) /. z -> 0, Trig -> False];
DSolveValue[{θwc'[y] + bc (θwc[y] - sol /. z -> 0) == 0, θwc[0] == 0},
θwc[y], {y, 0, 1}] // Chop;
Plot3D[{%, %%}, {x, 0, 1}, {y, 0, 1}, ImageSize -> Large,
AxesLabel -> {x, y, θwc}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/wFGZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wFGZk.png" alt="enter image description here" /></a></p>
<pre><code>Simplify[(sol + D[sol, z]/rh) /. z -> 1, Trig -> False];
DSolveValue[{θwh'[x] + bh (θwh[x] - sol /. z -> 1) == 0, θwh[0] == 1},
θwh[x], {x, 0, 1}] // Chop;
Plot3D[{%, %%}, {x, 0, 1}, {y, 0, 1}, ImageSize -> Large,
AxesLabel -> {x, y, θwh}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/qGHMw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qGHMw.png" alt="enter image description here" /></a></p>
<p>The computation shown here required only a few minutes on my PC.</p>
|
242,203 | <p>What's the derivative of the integral $$\int_1^x\sin(t) dt$$</p>
<p>Any ideas? I'm getting a little confused.</p>
| Amr | 29,267 | <p>Using the fundamental theorem of calculus we know that the answer is $\sin(x)$</p>
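<p>For a quick numerical confirmation (my addition): since $\int_1^x\sin(t)\,dt=\cos(1)-\cos(x)$, a central difference of this closed form recovers $\sin(x)$:</p>

```python
import math

# Closed form of the integral: cos(1) - cos(x)
F = lambda x: math.cos(1.0) - math.cos(x)

for x in (0.5, 1.7, 3.0):
    dF = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6  # central difference
    print(x, dF, math.sin(x))
```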
|
2,306,570 | <p>I'm new to this topic and trying to solve a system of equations over the field $Z_{3}$:
$$\begin{array}{rcr} x+2z & = & 1 \\ y+2z & = & 2 \\ 2x+z & = & 1 \end{array}$$</p>
<p>I solved the system but I have roots:
$$x=1/3,
y=4/3, z=1/3$$ and it's probably not right. Can you help with this one?</p>
| Ahmed S. Attaalla | 229,023 | <p>If,</p>
<p>$$AX+B=CX$$</p>
<p>Then, by the existence of additive inverses,</p>
<p>$$AX+B-AX=CX-AX$$</p>
<p>And by commutativity and associative properties of matrix addition,</p>
<p>$$B=CX-AX$$</p>
<p>And by distributive property of matrix multiplication,</p>
<p>$$B=(C-A)X$$</p>
<p>If an inverse of $C-A$ exists (and it does in our case), we may multiply by it to get</p>
<p>$$(C-A)^{-1}B=(C-A)^{-1}(C-A)X=IX=X$$</p>
<p>In conclusion,</p>
<blockquote>
<p>$$X=(C-A)^{-1}B$$</p>
</blockquote>
<p>Where $C=\begin{pmatrix}-2&5\\-5&8\end{pmatrix}$, $A=\begin{pmatrix}-6&8\\2&3\end{pmatrix}$, and $B=\begin{pmatrix}-2&-2\\2&-2\end{pmatrix}$.</p>
<p>Hence,</p>
<blockquote class="spoiler">
<p> $X=\begin{pmatrix} 4&-3 \\ -7&5 \end{pmatrix}^{-1}\begin{pmatrix}-2&-2\\2&-2\end{pmatrix}=-\begin{pmatrix} 5&3 \\ 7&4 \end{pmatrix}\begin{pmatrix}-2&-2\\2&-2\end{pmatrix}= \begin{pmatrix} 4& 16\\ 6& 22 \end{pmatrix}$</p>
</blockquote>
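<p>A quick Python verification of the result (my addition), checking $AX+B=CX$ directly with plain integer matrices:</p>

```python
def matmul(P, Q):
    # Row-by-column product of two small integer matrices
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matadd(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(len(P[0]))] for i in range(len(P))]

A = [[-6, 8], [2, 3]]
B = [[-2, -2], [2, -2]]
C = [[-2, 5], [-5, 8]]
X = [[4, 16], [6, 22]]

assert matadd(matmul(A, X), B) == matmul(C, X)
print(matmul(C, X))  # [[22, 78], [28, 96]]
```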
|
2,306,570 | <p>I'm new to this topic and trying to solve a system of equations over the field $Z_{3}$:
$$\begin{array}{rcr} x+2z & = & 1 \\ y+2z & = & 2 \\ 2x+z & = & 1 \end{array}$$</p>
<p>Solving over the rationals I get $$x=1/3,\quad y=4/3,\quad z=1/3,$$ which can't be right in $\mathbb{Z}_3$. Can you help with this one?</p>
| Bill T | 451,681 | <p>This is an easy way to understand this question, though it requires a fair amount of work.</p>
<p><a href="https://i.stack.imgur.com/Iv781.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p><a href="https://i.stack.imgur.com/rZaFj.jpg" rel="nofollow noreferrer">enter image description here</a></p>
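<p>Independently of the linked images, a brute-force check (a Python sketch, not part of the original answer) shows why the rational roots are a red flag: doubling the first equation gives $2x + 4z \equiv 2x + z \equiv 2 \pmod 3$, which contradicts the third equation $2x + z \equiv 1$, so the system has no solution over $Z_{3}$ at all. That is also why the rational solution, with denominators divisible by $3$, cannot be reduced mod $3$.</p>

```python
from itertools import product

# Try every (x, y, z) in {0, 1, 2}^3 and keep those satisfying
# all three congruences mod 3.
solutions = [
    (x, y, z)
    for x, y, z in product(range(3), repeat=3)
    if (x + 2 * z) % 3 == 1
    and (y + 2 * z) % 3 == 2
    and (2 * x + z) % 3 == 1
]
print(solutions)  # [] -- no solution over Z_3
```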
|
2,563,402 | <p>For which values of $a$ does the system
$$x_1 + x_2 + x_3 = 1$$
$$x_1 + 2x_2 + ax_3 = 2$$
$$2x_1 + ax_2 + 4x_3 = a^2$$
have (i) a unique solution, (ii) no solution, (iii) infinitely many solutions? Where the system has infinitely many solutions, write the solutions in parametric form.</p>
<p>So I tried to row-reduce the matrix and got to this point:</p>
<p>$$\left[\begin{array}{ccc|c}1&1&1&1\\ 0&1&a-1&1\\ 0&a-2&2&a^2-2\end{array}\right]$$</p>
<p>But I'm a little confused about how to continue. How exactly would I change the $a-2$ to a $0$ and the $2$ to a $1$?</p>
<p>Any help?</p>
| Community | -1 | <p><strong>Hint:</strong></p>
<p>The case analysis depends on the zeros of the coefficient determinant. Expanding the row-reduced matrix along its first column,</p>
<p>$$\Delta=2-(a-2)(a-1).$$</p>
<p>The rest is yours.</p>
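<p>As a quick symbolic check (a SymPy sketch, not part of the original hint), the determinant of the coefficient matrix agrees with $\Delta$ above, and its zeros $a=0$ and $a=3$ are exactly the values where the system fails to have a unique solution:</p>

```python
import sympy as sp

a = sp.symbols('a')
# Coefficient matrix of the original system
M = sp.Matrix([[1, 1, 1], [1, 2, a], [2, a, 4]])
delta = sp.expand(M.det())
print(delta)                            # -a**2 + 3*a
print(sp.solve(sp.Eq(delta, 0), a))     # [0, 3]
# Same polynomial as the hint's 2 - (a-2)(a-1)
print(sp.expand(2 - (a - 2) * (a - 1)))  # -a**2 + 3*a
```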
|
1,732,526 | <p>I start by expanding the denominator and separating the real and imaginary parts, but I get stuck when deciding what my $u$ and $v$ should be.</p>
<p>Thanks.</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. One may observe that, for $a \neq 0$ and for $|z|<|a|$, we have</p>
<blockquote>
<p>$$
z \mapsto \frac{1}{a+z}=\sum_{n=0}^\infty\frac{(-1)^nz^n}{a^{n+1}} \tag1
$$ </p>
</blockquote>
<p>and the considered function is analytic over $|z|<|a|$.</p>
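<p>Identity $(1)$ is just the geometric series for $\frac{1}{a}\cdot\frac{1}{1+z/a}$; a quick numerical spot-check with sample values (a sketch, not part of the original hint):</p>

```python
a_val, z_val = 2.0, 0.5  # sample values with |z| < |a|

# Partial sum of (-1)^n z^n / a^(n+1); converges since |z/a| < 1
partial = sum((-1) ** n * z_val ** n / a_val ** (n + 1) for n in range(60))
print(partial, 1 / (a_val + z_val))  # both ~0.4
```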
<p>Now, by a partial fraction decomposition, one may obtain</p>
<blockquote>
<p>$$
\frac{z^2+1}{(3z-1)(z-i+1)}= \frac{1}{3}-\frac{\frac{2}{5}-\frac{i}{5}}{1-i+z}+\frac{\frac{8}{45}+\frac{2i}{15}}{-1/3+z} \tag2
$$</p>
</blockquote>
<p>then one may conclude using $(1)$.</p>
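<p>Decomposition $(2)$ can be double-checked symbolically (a SymPy sketch, not part of the original answer):</p>

```python
import sympy as sp

z = sp.symbols('z')
f = (z**2 + 1) / ((3 * z - 1) * (z - sp.I + 1))
decomp = (sp.Rational(1, 3)
          - (sp.Rational(2, 5) - sp.I / 5) / (1 - sp.I + z)
          + (sp.Rational(8, 45) + 2 * sp.I / 15) / (sp.Rational(-1, 3) + z))

# The difference should vanish identically; evaluate at an arbitrary point.
print(complex((f - decomp).subs(z, 2.5)))  # ~0
```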
|
34,294 | <p>Let $M$ be a 2-dimensional Riemannian manifold of non-positive curvature everywhere, of genus > 1. Let $\textbf{D} \subset \textbf{C}$ be the open unit disc in the complex plane, the universal cover of $M$. Let $\gamma \subset \textbf{D}$ be a curve representing a geodesic in $M$ which is entirely in a region of zero curvature. It seems to me, because the definition of geodesic is local, that unless $\gamma$ is tangent to some region $A \subset \textbf{D}$ of negative curvature, $\gamma$ will be a Euclidean line through $\textbf{D}$. Is this correct?</p>
<p>Secondly, does anybody have any references that I could peruse to learn how a geodesic $\gamma$ which <em>does</em> in fact pass tangent to some $A$ of negative curvature reacts to this region (will it turn <em>into</em> $A$, away from it, etc. and maybe some way of calculating the actual effect)?</p>
<p>Thank you. </p>
| Sergei Ivanov | 4,354 | <p>I am answering the question as clarified in your Aug 3 comment. No, it is not always possible to identify the universal cover with $D$ so that all zero-curvature geodesics become straight lines.</p>
<p>Begin with a metric in a small region where some 9 zero-curvature geodesics intersect each other but violate the Desargues theorem. For example, begin with a standard Desargues configuration and introduce a bit of negative curvature so that one of the lines misses one of the intersection points. For each of these 9 geodesics, attach a long narrow planar strip to the boundary of the region: each end of the strip is attached to the place where the geodesic meets the boundary, so that the geodesic extended into the strip closes up (and has a neighborhood isometric to a straight cylinder which lies partly in the strip and partly in the original region).</p>
<p>Now we have a surface with boundary, and on this surface some 9 zero-curvature closed geodesics violate the Desargues theorem within a simply connected region. Fill the boundary components by surfaces of sufficiently large genus - so that they can be equipped with a nonpositively curved metric. Now we have a closed surface. In its universal cover, the lifts of the 9 geodesics still violate the Desargues theorem. Hence they cannot be represented by straight lines no matter how you identify the universal cover with $D$.</p>
|
3,059,676 | <blockquote>
<p>Why is the sum of all external angles in a convex polygon <span class="math-container">$360^\circ$</span>? </p>
</blockquote>
<p>From my understanding, for each vertex in a convex polygon, there exist exactly $2$ exterior angles corresponding to it, which are equal, vertically opposite, and each supplementary to the interior angle. If we take as given that the sum of the interior angles of an $n$-gon is $(n-2)\cdot 180^\circ$, then $$\sum_i 2\cdot (180^\circ-\alpha_i) = n\cdot 360^\circ - (n-2)\cdot 360^\circ = 720^\circ.$$ Am I missing something here?</p>
| punk4me | 863,609 | <p>I recommend you look at <em>Geometry</em> by Jurgensen, in the section on the angles of a polygon.</p>
<p>Draw a picture; it will help. Start with a pentagon to get some intuition. Now separate the inside of the polygon into non-overlapping triangles, and observe how this gives the total sum of the interior angles of the polygon, and then the sum of its exterior angles.</p>
<p>Basically, given a convex polygon, we can form a linear pair (a pair of supplementary angles) at each vertex.</p>
<p>Taking all <span class="math-container">$n$</span> vertices of the polygon into account, we have <span class="math-container">$n \cdot 180 $</span> degrees.</p>
<p>The polygon can be partitioned/separated into exactly <span class="math-container">$n-2$</span> triangles.</p>
<p>So, the sum of all the interior angles of the polygon is <span class="math-container">$$(n-2) \cdot 180 \text{ degrees} $$</span></p>
<p>Hence, the sum of all the external angles is the difference</p>
<p><span class="math-container">$$ n \cdot 180 - (n-2) \cdot 180 = 2\cdot 180 = 360 \text{ degrees} $$</span></p>
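<p>A quick integer-arithmetic check (a Python sketch, not part of the original answer): the difference between the vertex total and the interior total is 360 regardless of the number of sides:</p>

```python
for n in range(3, 13):
    interior_sum = (n - 2) * 180           # sum of interior angles
    exterior_sum = n * 180 - interior_sum  # one linear pair per vertex
    print(n, exterior_sum)  # always 360
```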
|