| qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
81,209 | <p>I feel a bit ashamed to ask the following question here. </p>
<blockquote>
<p>What is (actually, is there) Galois
theory for polynomials in
$n$-variables for $n\geq2$?</p>
</blockquote>
<p>I am preparing a large audience talk on Lie theory, and decided to start talking about symmetries and take Galois theory as a "baby" example. I know that Lie groups are somehow to differential equations what discrete groups are to algebraic equations. But I nevertheless would expect Lie (or algebraic) groups to appear naturally as higher dimensional analogs of Galois groups. </p>
<p>Namely, the Galois group $G_P$ of a polynomial $P(x)$ in one variable can be defined as the symmetry group of the equation $P(x)=0$ (very shortly, the subgroup of permutations of the solutions/roots that preserves any algebraic equation satisfied by them). </p>
<p>Then one of the great results of Galois theory is that $P(x)=0$ is solvable by radicals if and only if the group $G_P$ is solvable (meaning that its derived series reaches $\{1\}$). </p>
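<p>The "subgroup of permutations of the roots that preserves any algebraic equation satisfied by them" can be made concrete numerically: the elementary symmetric functions of the roots are invariant under <em>every</em> permutation, and they recover the coefficients of $P$. A quick sketch (Python/NumPy, purely illustrative, not part of the original question):</p>

```python
import itertools
import numpy as np

# P(x) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
coeffs = [1, -6, 11, -6]
roots = np.roots(coeffs)

def elem_sym(rs):
    """Elementary symmetric functions e1, e2, e3 of three roots."""
    e1 = sum(rs)
    e2 = sum(a * b for a, b in itertools.combinations(rs, 2))
    e3 = np.prod(rs)
    return e1, e2, e3

# Every permutation of the roots gives the same values,
# matching the coefficients 6, 11, 6 (up to sign conventions).
for perm in itertools.permutations(roots):
    e1, e2, e3 = elem_sym(perm)
    assert np.allclose([e1, e2, e3], [6, 11, 6])
```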
<p>I was wondering what the analog of the story is in higher dimensions (i.e. for equations of the form $P(x_1,\dots,x_n)=0$). I would naively expect algebraic groups to show up... </p>
<hr>
<p>I googled the main keywords and found <a href="http://www.ucl.ac.uk/~ucahmki/sheffield.pdf">this presentation</a>: on the last slide it is written that </p>
<blockquote>
<p>the task at hand is to develop a
Galois theory of polynomials in two
variables</p>
</blockquote>
<p>This convinced me to go ahead and ask the question anyway.</p>
<hr>
<p><strong>EDIT: the first "idea" I had</strong></p>
<p>I first thought about the following strategy. Consider $P(x,y)=0$ as a polynomial equation in one variable $x$ with coefficients in the field $k(y)$ of rational functions in $y$, and consider its Galois group. But then we could do the opposite... what would happen?</p>
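<p>To make the asymmetry of the two viewpoints concrete: the same bivariate polynomial generally has different degrees, and hence different Galois-theoretic data, as a polynomial in $x$ over $k(y)$ versus in $y$ over $k(x)$. A small illustration (SymPy sketch, not part of the original question):</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
P = y**2 - x**3 - x  # an elliptic-curve-style equation P(x, y) = 0

# Viewed over k(x): degree 2 in y, so the splitting field is at
# most a quadratic extension of k(x) (Galois group inside S_2).
deg_in_y = sp.degree(P, y)

# Viewed over k(y): degree 3 in x (Galois group inside S_3).
deg_in_x = sp.degree(P, x)
```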
| KristianJS | 19,367 | <p>(This should really be a comment I think, but I'm not highly rated enough to leave one, so please bear with me)</p>
<p>A Galois Theoretic condition for a polynomial in two variables to be solvable by radicals is found in the following paper: <a href="http://arxiv.org/abs/math/0305226" rel="noreferrer">http://arxiv.org/abs/math/0305226</a>. It seems to indicate that something similar can be done for higher variables. Perhaps I'll ask Jochen next time I see him about this.</p>
|
139,232 | <p>Let $O$ be an operad in $\mathtt{SETS}$. Assume that $O(0)$ is empty and $O(1)$ only consists of the identity. Assume for simplicity that $O$ is monochromatic, i.e. we have no labels on the in/outputs. Assume also for simplicity that the operad is plain, i.e. neither symmetric nor braided. So the operads in question consist of a set of $n$-ary operations $O(n)$ for each $n\in\mathbb{N}$ together with an associative composition and there is a unit element in $O(1)$ (but no more elements, as required above).</p>
<p>Now $O$ freely generates a monoidal category $S(O)$: The objects are natural numbers and an arrow from $m$ to $n$ consists of a sequence of operations in $O$ with a total of $m$ inputs and a total of $n$ outputs. For example if $a\in O(3)$ and $b\in O(5)$, then $(a,b)$ is an arrow from $3+5=8$ to $2$. Composition is given by composition in the operad.</p>
<p>I know that $S(O)$ is aspherical when $O$ is free and also in some other special cases. Here I consider categories as spaces via the usual geometric realization, i.e. the geometric realization of the nerve of the category.</p>
<p>Question: Is the category $S(O)$ always aspherical?</p>
| James Griffin | 110 | <p>I've left my original answer as some people may find it of interest.</p>
<p>I have a candidate counterexample. The idea is to find a (non-symmetric) set operad in between the free operad $Free_2$ on a single arity 2 generator and the associative operad $As$. The example I've chosen is the operad $P$ which is isomorphic to the free operad in arities 1, 2 and 3, but trivial for arities 4 and above. There is a diagram
$$ Free_2 \rightarrow P \rightarrow As $$</p>
<p>The monoidal category $S(P)$ defined in the question has contractible fundamental groupoid. I'll leave this as an exercise. It is quick if you are used to representing Thompson's group F via pairs of trees: the trees with k leaves are all equivalent in P for $k>4$, but any group element in F is represented (in perhaps a non-reduced way) by a pair of trees with more than 4 leaves. </p>
<p>To finish we can construct a non-trivial cycle in the homology of the nerve. My guess is the following:<img src="https://i.stack.imgur.com/96KaH.png" alt="2-chain in the nerve">.</p>
<p>I hope that it is fairly clear which element this is, each tree is meant to represent two composable morphisms. You can check that it's a 2-cycle with relative ease, the tricky bit is to show that it's non-trivial. I'm not 100% sure that it is, but I'll explain why I chose it. The point is that either the first two or last two terms are chosen to kill the 1-cycle which is the difference of the two trees of arity 3. In the group F this is a representative of the 1st homology group, but for S(P) it is zero. There is more than one way to kill this 1-cycle and the 2-cycle above is the difference of these two.</p>
<p>To prove that it's not a boundary I guess an explicit calculation using the fact that the homology of F is easy to calculate could do the trick.</p>
<p>Don't hesitate to ask me to expand on any of this.</p>
|
<p>Given $n+1$ data pairs $(x_0,y_0),\dots,(x_n,y_n)$, for $j=0,1,2,\dots,n$ we have
$p_j=\prod_{i\neq j}(x_j-x_i)$ and $\psi(x)=\prod_{i=0}^n(x-x_i)$.</p>
<p>I am having trouble determining what $\psi(x_j)$ is and what $\psi'(x_j)$ would be. </p>
<p>I feel like $\psi(x_j)= 0$ because it would contain the $x_j-x_j$ term... But I feel like I am missing something...</p>
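<p>A quick numerical check (Python sketch, illustrative only) confirms both guesses: $\psi(x_j)=0$ because of the $(x_j-x_j)$ factor, and by the product rule $\psi'(x_j)=p_j$, since every term of $\psi'$ except the one omitting the factor $(x-x_j)$ still vanishes at $x_j$:</p>

```python
import numpy as np

nodes = np.array([0.0, 1.0, 2.5, 4.0])  # sample x_0, ..., x_n

def psi(x):
    return np.prod(x - nodes)

def psi_prime(x):
    # derivative of the product: sum over terms, each omitting one factor
    return sum(np.prod(np.delete(x - nodes, i)) for i in range(len(nodes)))

for j, xj in enumerate(nodes):
    p_j = np.prod(np.delete(xj - nodes, j))  # prod_{i != j} (x_j - x_i)
    assert psi(xj) == 0.0                    # the (x_j - x_j) factor kills it
    assert np.isclose(psi_prime(xj), p_j)    # psi'(x_j) = p_j
```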
| parsiad | 64,601 | <p><strong>Hint</strong>: $\tanh(nx)\rightarrow \operatorname{sign}(x)$ pointwise as $n\rightarrow\infty$. Now, use the fact that the uniform limit of continuous functions is continuous for a locally compact space (such as $\mathbb{R}$).</p>
|
<p>Given $n+1$ data pairs $(x_0,y_0),\dots,(x_n,y_n)$, for $j=0,1,2,\dots,n$ we have
$p_j=\prod_{i\neq j}(x_j-x_i)$ and $\psi(x)=\prod_{i=0}^n(x-x_i)$.</p>
<p>I am having trouble determining what $\psi(x_j)$ is and what $\psi'(x_j)$ would be. </p>
<p>I feel like $\psi(x_j)= 0$ because it would contain the $x_j-x_j$ term... But I feel like I am missing something...</p>
| hamam_Abdallah | 369,188 | <p>As $\tanh$ is an odd function, we may assume $x>0$.</p>
<p>$$\tanh(nx)=\frac{1-e^{-2nx}}{1+e^{-2nx}}$$</p>
<p>thus</p>
<p>$\lim_{n\to\infty}\tanh(nx)=1$</p>
<p>If $x=0$, then $f_n(0)=0$.</p>
<p>All the functions $f_n$ are continuous on $\mathbb R$.</p>
<p>The pointwise limit function is not continuous at $0$, so the convergence is not uniform on $\mathbb R$.</p>
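<p>The non-uniformity can also be seen numerically: $\sup_x|\tanh(nx)-\operatorname{sign}(x)|$ does not go to $0$, because points ever closer to $0$ keep the error near $1$. A quick check (Python sketch, just for illustration):</p>

```python
import numpy as np

def sup_error(n, xs):
    """Max error |tanh(nx) - sign(x)| over a grid of sample points."""
    return np.max(np.abs(np.tanh(n * xs) - np.sign(xs)))

xs = np.linspace(-1, 1, 100001)  # grid with points close to the jump at 0

# Pointwise: at any fixed x != 0 the error vanishes as n grows...
assert abs(np.tanh(100 * 0.5) - 1.0) < 1e-12
# ...but the supremum over the grid stays close to 1 for every n,
# witnessed by grid points right next to 0.
for n in [1, 10, 100, 1000]:
    assert sup_error(n, xs) > 0.9
```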
|
145,286 | <p>Yesterday I got into an argument with @UnchartedWorks <a href="https://mathematica.stackexchange.com/a/145207/26956">in the comment thread here</a>. At first glance, he posted a duplicate of <a href="https://mathematica.stackexchange.com/a/145202/26956">Marius' answer</a>, but with some unnecessary memoization:</p>
<pre><code>unitize[x_] := unitize[x] = Unitize[x]
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt]
</code></pre>
<p>and proposed the following test to justify his claim that his approach is faster:</p>
<pre><code>RandomSeed[1];
n = -1;
data = RandomChoice[Range[0, 10], {10^8, 3}];
AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]
AbsoluteTiming[pick[data, unitize@data[[All, n]], 1] // Length]
(*
{7.3081, 90913401}
{5.87919, 90913401}
*)
</code></pre>
<p>A significant difference. Naturally, I was skeptical. The evaluation queue for his <code>pick</code> is (I believe) as follows:</p>
<ol>
<li><code>pick</code> is inert, so evaluate the arguments.</li>
<li><code>data</code> is just a list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to a list</li>
<li><code>unitize@data[[All, n]]</code> writes a large <code>DownValue</code>...</li>
<li>...calling <code>Unitize@data[[All, n]]</code> in the process, returning the unitized list.</li>
<li>Another large <code>DownValue</code> of the form <code>pick[data] = *pickedList*</code> is created (<code>data</code> here is, of course, meant in its evaluated form), never to be called again (unless, for some reason, we explicitly type <code>pick[data]</code>).</li>
<li>The <code>*pickedList*</code> is returned.</li>
</ol>
<p>What about the evaluation queue for <code>Pick[data, Unitize@data[[All, n]], 1]</code>?</p>
<ol>
<li><code>Pick</code> is inert.</li>
<li><code>data</code> becomes an inert list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to an inert list.</li>
<li>Nothing happens here.</li>
<li><code>Unitize@data[[All, n]]</code> returns the unitized list.</li>
<li>Nothing happens here either.</li>
<li>The same step as before is taken to get us the picked list.</li>
</ol>
<p>So, clearly <code>pick</code> has more things to do than <code>Pick</code>.</p>
<p>To test this out I run the following code:</p>
<pre><code>Quit[]
$HistoryLength = 0;
Table[
Clear[pick, unitize, data];
unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];
data = RandomChoice[Range[0, 10], {i*10^7, 3}];
{Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First},
{i, 5}]
</code></pre>
<p>Much to my surprise, <code>pick</code> is <em>consistently</em> faster!</p>
<blockquote>
<pre><code>{{0.482837, 0.456147},
{1.0301, 0.90521},
{1.46596, 1.35519},
{1.95202, 1.8664},
{2.4317, 2.37112}}
</code></pre>
</blockquote>
<p>How can I <s>protect myself from black magic</s> make a representative test? Or <s>should I embrace the black magic</s> is this real and a valid way to speed things up?</p>
<p><strong>Update re: answer by Szabolcs</strong></p>
<p>Reversing the order of the list like so:</p>
<pre><code>{pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First}
</code></pre>
<p>gave me the following result:</p>
<blockquote>
<pre><code>{{0.466251, 0.497084},
{1.18016, 1.17495},
{1.34997, 1.42752},
{1.80211, 1.93181},
{2.25766, 2.39347}}
</code></pre>
</blockquote>
<p>Once again, regardless of order of operations, <code>pick</code> is faster. Caching could be suspect, and as mentioned in the comment thread of the other question, I did try throwing in a <code>ClearSystemCache[]</code> between the <code>pick</code> and <code>Pick</code>, but that didn't change anything.</p>
<p>Szabolcs suggested that I throw out the memoization and just use wrapper functions. I presume, he meant this:</p>
<pre><code>unitize[x_] := Unitize[x];
pick[xs_, sel_, patt_] := Pick[xs, sel, patt];
</code></pre>
<p>As before, on a fresh kernel I set history length to 0 and run the <code>Table</code> loop. I get this:</p>
<pre><code>{{0.472934, 0.473249},
{0.954632, 0.96373},
{1.42848, 1.43364},
{1.91283, 1.90989},
{2.37743, 2.40031}}
</code></pre>
<p>i.e. nearly equal results, sometimes one is faster, sometimes the other (left column is <code>pick</code>, right is <code>Pick</code>). The functions perform as well as <code>Pick</code> in a fresh kernel.</p>
<p>I try again with the memoization as described towards the beginning of the answer:</p>
<pre><code>{{0.454302, 0.473273},
{0.93477, 0.947996},
{1.35026, 1.4196},
{1.79587, 1.90001},
{2.24727, 2.38676}}
</code></pre>
<p>The memoized <code>pick</code> and <code>unitize</code> perform consistently better out of a fresh kernel. Of course, it uses twice the memory along the way.</p>
| Szabolcs | 12 | <p>You are absolutely correct that this memoization is completely unnecessary.</p>
<p>What seems to happen is that, from the second run onwards on the same data, the built-in functions become faster. I do not understand why (perhaps some internal caching), but it does show that the speedup has absolutely nothing to do with the memoization:</p>
<pre><code>In[38]:= AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]
AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]
AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]
Out[38]= {10.6117, 90909421}
Out[39]= {8.08706, 90909421}
Out[40]= {7.96311, 90909421}
</code></pre>
<p>Another comment:</p>
<p><code>RandomSeed</code> is an option, not a function. It should be <code>SeedRandom</code>, otherwise it does nothing.</p>
|
145,286 | <p>Yesterday I got into an argument with @UnchartedWorks <a href="https://mathematica.stackexchange.com/a/145207/26956">in the comment thread here</a>. At first glance, he posted a duplicate of <a href="https://mathematica.stackexchange.com/a/145202/26956">Marius' answer</a>, but with some unnecessary memoization:</p>
<pre><code>unitize[x_] := unitize[x] = Unitize[x]
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt]
</code></pre>
<p>and proposed the following test to justify his claim that his approach is faster:</p>
<pre><code>RandomSeed[1];
n = -1;
data = RandomChoice[Range[0, 10], {10^8, 3}];
AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length]
AbsoluteTiming[pick[data, unitize@data[[All, n]], 1] // Length]
(*
{7.3081, 90913401}
{5.87919, 90913401}
*)
</code></pre>
<p>A significant difference. Naturally, I was skeptical. The evaluation queue for his <code>pick</code> is (I believe) as follows:</p>
<ol>
<li><code>pick</code> is inert, so evaluate the arguments.</li>
<li><code>data</code> is just a list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to a list</li>
<li><code>unitize@data[[All, n]]</code> writes a large <code>DownValue</code>...</li>
<li>...calling <code>Unitize@data[[All, n]]</code> in the process, returning the unitized list.</li>
<li>Another large <code>DownValue</code> of the form <code>pick[data] = *pickedList*</code> is created (<code>data</code> here is, of course, meant in its evaluated form), never to be called again (unless, for some reason, we explicitly type <code>pick[data]</code>).</li>
<li>The <code>*pickedList*</code> is returned.</li>
</ol>
<p>What about the evaluation queue for <code>Pick[data, Unitize@data[[All, n]], 1]</code>?</p>
<ol>
<li><code>Pick</code> is inert.</li>
<li><code>data</code> becomes an inert list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to an inert list.</li>
<li>Nothing happens here.</li>
<li><code>Unitize@data[[All, n]]</code> returns the unitized list.</li>
<li>Nothing happens here either.</li>
<li>The same step as before is taken to get us the picked list.</li>
</ol>
<p>So, clearly <code>pick</code> has more things to do than <code>Pick</code>.</p>
<p>To test this out I run the following code:</p>
<pre><code>Quit[]
$HistoryLength = 0;
Table[
Clear[pick, unitize, data];
unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];
data = RandomChoice[Range[0, 10], {i*10^7, 3}];
{Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First},
{i, 5}]
</code></pre>
<p>Much to my surprise, <code>pick</code> is <em>consistently</em> faster!</p>
<blockquote>
<pre><code>{{0.482837, 0.456147},
{1.0301, 0.90521},
{1.46596, 1.35519},
{1.95202, 1.8664},
{2.4317, 2.37112}}
</code></pre>
</blockquote>
<p>How can I <s>protect myself from black magic</s> make a representative test? Or <s>should I embrace the black magic</s> is this real and a valid way to speed things up?</p>
<p><strong>Update re: answer by Szabolcs</strong></p>
<p>Reversing the order of the list like so:</p>
<pre><code>{pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First,
Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First}
</code></pre>
<p>gave me the following result:</p>
<blockquote>
<pre><code>{{0.466251, 0.497084},
{1.18016, 1.17495},
{1.34997, 1.42752},
{1.80211, 1.93181},
{2.25766, 2.39347}}
</code></pre>
</blockquote>
<p>Once again, regardless of order of operations, <code>pick</code> is faster. Caching could be suspect, and as mentioned in the comment thread of the other question, I did try throwing in a <code>ClearSystemCache[]</code> between the <code>pick</code> and <code>Pick</code>, but that didn't change anything.</p>
<p>Szabolcs suggested that I throw out the memoization and just use wrapper functions. I presume, he meant this:</p>
<pre><code>unitize[x_] := Unitize[x];
pick[xs_, sel_, patt_] := Pick[xs, sel, patt];
</code></pre>
<p>As before, on a fresh kernel I set history length to 0 and run the <code>Table</code> loop. I get this:</p>
<pre><code>{{0.472934, 0.473249},
{0.954632, 0.96373},
{1.42848, 1.43364},
{1.91283, 1.90989},
{2.37743, 2.40031}}
</code></pre>
<p>i.e. nearly equal results, sometimes one is faster, sometimes the other (left column is <code>pick</code>, right is <code>Pick</code>). The functions perform as well as <code>Pick</code> in a fresh kernel.</p>
<p>I try again with the memoization as described towards the beginning of the answer:</p>
<pre><code>{{0.454302, 0.473273},
{0.93477, 0.947996},
{1.35026, 1.4196},
{1.79587, 1.90001},
{2.24727, 2.38676}}
</code></pre>
<p>The memoized <code>pick</code> and <code>unitize</code> perform consistently better out of a fresh kernel. Of course, it uses twice the memory along the way.</p>
| jkuczm | 14,303 | <p>I can't reproduce claimed speedup on <code>"11.0.1 for Linux x86 (64-bit) (September 21, 2016)"</code>.</p>
<p>In my tests, custom function wrappers without memoization (as <a href="https://mathematica.stackexchange.com/questions/145286/throwaway-memoization-makes-built-ins-faster#comment390746_145286">suggested by Szabolcs</a>) consistently add an overhead of about 1 µs, and functions with memoization add a 2-3 µs overhead, compared to built-ins. This overhead is measurable only for small lists; for larger lists it's completely negligible.</p>
<p>An important point is that the results of <code>AbsoluteTiming</code> are very volatile, with a median deviation from the minimal value, for larger lists, of a few to ten percent. I'm sure there are better ways to measure this volatility; I used the median deviation just to have some estimate.</p>
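<p>The same min-plus-spread approach translates directly to other environments; for comparison, a rough Python analogue of the <code>minDev</code> helper below (illustrative only, using the standard-library <code>timeit</code>):</p>

```python
import statistics
import timeit

def min_dev(stmt, setup="pass", repeats=20, number=1):
    """Minimum timing plus median deviation from that minimum,
    mirroring the minDev Mathematica helper."""
    res = timeit.repeat(stmt, setup=setup, repeat=repeats, number=number)
    m = min(res)
    return m, statistics.median(r - m for r in res)

best, spread = min_dev("sum(range(10**5))")
```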
<p>Code used for timings:</p>
<pre><code>$HistoryLength = 0;
minDev // ClearAll
minDev // Attributes = HoldFirst;
minDev[expr_, n_Integer?Positive] := Module[{res, min},
res = Table[expr, n];
min = Min@res;
{min, Median[res - min]}
]
testBuiltin@data_ := (
ClearSystemCache[];
Pick[data, Unitize@data, 1] // AbsoluteTiming // First
)
testCustom@data_ := (
ClearSystemCache[];
ClearAll[unitize, pick];
unitize[x_] := Unitize[x];
pick[xs_, sel_, patt_] := Pick[xs, sel, patt];
pick[data, unitize@data, 1] // AbsoluteTiming // First
)
testMemo@data_ := (
ClearSystemCache[];
ClearAll[unitize, pick];
unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];
pick[data, unitize@data, 1] // AbsoluteTiming // First
)
testAll[k_, n_] :=
With[{data = (SeedRandom[1]; RandomChoice[Range[0, 10], k])},
minDev[#@data, n] & /@ {testMemo, testCustom, testBuiltin}
]
format = TableForm@Map[
NumberForm[#, ExponentFunction -> (Null &)] &,
SetAccuracy[#, Min[Accuracy@SetPrecision[Min@#, 2], 7]],
{-1}
] &;
</code></pre>
<p>The first argument of <code>testAll</code> is the size of the <code>data</code> used, the second is the number of repeated timings. The first column of each result is the minimal absolute timing, the second is the median deviation from this minimal value. The first rows are results for the memoized custom <code>pick</code> and <code>unitize</code>, the second rows for the non-memoized custom functions, and the third rows for the built-in <code>Pick</code> and <code>Unitize</code>.</p>
<pre><code>testAll[10^1, 10^5]//format
(* 0.000004 0.*10^(-7)
0.000002 0.000001
0.000001 0.000001 *)
testAll[10^2, 10^5]//format
(* 0.000005 0.000001
0.000003 0.000001
0.000002 0.000001 *)
testAll[10^3, 10^5]//format
(* 0.000015 0.000001
0.000014 0.*10^(-7)
0.000013 0.000001 *)
testAll[10^4, 10^5]//format
(* 0.000124 0.000002
0.000122 0.000002
0.000121 0.000001 *)
testAll[10^5, 10^4]//format
(* 0.001297 0.000093
0.001296 0.000069
0.001295 0.000103 *)
testAll[10^6, 10^3]//format
(* 0.0201 0.0014
0.0201 0.0011
0.0201 0.0012 *)
testAll[10^7, 10^2]//format
(* 0.2004 0.0148
0.2003 0.0099
0.2004 0.0088 *)
testAll[5 10^7, 2 10^1]//format
(* 0.972 0.021
0.974 0.017
0.973 0.022 *)
</code></pre>
<h1>Fresh kernel</h1>
<p>To make sure that we're not using any internal <em>Mathematica</em> cache that might not be cleared by <code>ClearSystemCache</code>, we can launch a separate kernel for each test using:</p>
<pre><code>freshKernelEvaluate // ClearAll
freshKernelEvaluate // Attributes = HoldAll;
freshKernelEvaluate@expr_ := Module[{link, result},
link = LinkLaunch[First@$CommandLine <> " -mathlink -noprompt"];
LinkWrite[link, Unevaluated@EvaluatePacket@expr];
result = LinkRead@link;
LinkClose@link;
Replace[result, ReturnPacket@x_ :> x]
]
</code></pre>
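<p>As an aside, the same fresh-process idea can be mimicked in other languages; a rough Python analogue of <code>freshKernelEvaluate</code> (illustrative only, not the Mathematica mechanism) launches a new interpreter per measurement so nothing cached in the current process can influence the timing:</p>

```python
import subprocess
import sys

def fresh_interpreter_time(stmt: str) -> float:
    """Time `stmt` in a brand-new Python process, so no in-process
    caches can carry over between measurements."""
    code = (
        "import time\n"
        "t0 = time.perf_counter()\n"
        f"{stmt}\n"
        "print(time.perf_counter() - t0)"
    )
    out = subprocess.run([sys.executable, "-c", code],
                         capture_output=True, text=True, check=True)
    return float(out.stdout)

t = fresh_interpreter_time("sum(range(10**6))")
```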
<p>Timings of built-ins:</p>
<pre><code>resBuiltin = Table[
freshKernelEvaluate[
SeedRandom@1;
data = RandomChoice[Range[0, 10], 5 10^7];
Pick[data, Unitize@data, 1] // AbsoluteTiming // First
],
100
]
</code></pre>
<blockquote>
<p>{1.28392, 1.23527, 1.25863, 1.23625, 1.33601, 1.24361, 1.26809,
1.23502, 1.34473, 1.23813, 1.24654, 1.23617, 1.27127, 1.25661,
1.22674, 1.58978, 1.26939, 1.37024, 1.24581, 1.54075, 1.23516,
1.23805, 1.3053, 1.40044, 1.42726, 1.39822, 1.46109, 1.27038,
1.39617, 1.2588, 1.29047, 1.23082, 1.25069, 1.34985, 1.27281,
1.24016, 1.2642, 1.2511, 1.23745, 1.27978, 1.24066, 1.38282, 1.32234,
1.30623, 1.26118, 1.58021, 1.27522, 1.24706, 1.27051, 1.2493,
1.24819, 1.28184, 1.46254, 1.24269, 1.26356, 1.24011, 1.35468,
1.27491, 1.35288, 1.24462, 1.27119, 1.26811, 1.23685, 1.33249,
1.23138, 1.29139, 1.23725, 1.28638, 1.23906, 1.27579, 1.3872,
1.31602, 1.29556, 1.26464, 1.27076, 1.24602, 1.25735, 1.24667,
1.27297, 1.23757, 1.34311, 1.26616, 1.35083, 1.24861, 1.23788,
1.25357, 1.24262, 1.28117, 1.25753, 1.28231, 1.23406, 1.27971,
1.22885, 1.27199, 1.24191, 1.23346, 1.26387, 1.24803, 1.27653,
1.23953}</p>
</blockquote>
<p>Timings of memoized custom functions:</p>
<pre><code>resMemo = Table[
freshKernelEvaluate[
SeedRandom@1;
data = RandomChoice[Range[0, 10], 5 10^7];
unitize[x_] := unitize[x] = Unitize[x];
pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt];
pick[data, unitize@data, 1] // AbsoluteTiming // First
],
100
]
</code></pre>
<blockquote>
<p>{1.35284, 1.23307, 1.27167, 1.23678, 1.27437, 1.25009, 1.27847,
1.2418, 1.23227, 1.39655, 1.26371, 1.26179, 1.27424, 1.27965, 1.236,
1.28489, 1.25988, 1.26318, 1.24007, 1.24381, 1.2672, 1.25462,
1.26703, 1.24123, 1.28868, 1.24192, 1.27177, 1.23488, 1.23468,
1.27525, 1.26571, 1.27287, 1.23757, 1.26981, 1.25737, 1.2729,
1.23705, 1.24429, 1.26927, 1.23292, 1.28266, 1.23352, 1.28423,
1.23743, 1.26883, 1.23515, 1.27272, 1.25892, 1.23213, 1.23746,
1.3435, 1.27545, 1.23472, 1.49113, 1.42916, 1.56421, 1.5238, 1.37695,
1.27734, 1.23146, 1.2388, 1.24054, 1.27661, 1.23467, 1.43818,
1.51605, 1.28172, 1.24674, 1.34043, 1.36447, 1.28034, 1.23788,
1.3027, 1.25299, 1.26136, 1.24514, 1.23405, 1.26157, 1.24994,
1.27737, 1.23637, 1.26785, 1.411, 1.24163, 1.2301, 1.29223, 1.25492,
1.25177, 1.26862, 1.25825, 1.23715, 1.25327, 1.2694, 1.6624, 1.24317,
1.26682, 1.27915, 1.25705, 1.23258, 1.25804}</p>
</blockquote>
<p>I don't see any consistent difference, distribution of results seems similar:</p>
<pre><code>res = <|"Built-in" -> resBuiltin, "Memo" -> resMemo|>;
ListPlot[res, PlotRange -> All]
Histogram[res, PlotRange -> All, ChartStyle -> {Blue, Orange}, ChartLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/THPxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/THPxN.png" alt="ListPlot and Histogram data vector"></a></p>
<h3>Edit</h3>
<p><a href="https://mathematica.stackexchange.com/questions/145286/throwaway-memoization-makes-built-ins-faster#comment390952_145327">UnchartedWorks writes in a comment</a> that <code>unitize</code> is enough to show difference in speed and <code>pick</code> is not necessary. <a href="https://mathematica.stackexchange.com/questions/145286/throwaway-memoization-makes-built-ins-faster#comment390961_145327">Carl Woll points out</a> that <code>data</code> matrix should be used instead of <code>data</code> vector.</p>
<p>After making the above changes I see a difference between the memoized and built-in versions. The memoized function is consistently <em>slower</em> than the built-in.</p>
<p>Used test functions:</p>
<pre><code>testBuiltin = freshKernelEvaluate[
$HistoryLength = 0;
SeedRandom@1;
data = RandomChoice[Range[0, 10], {#, 3}];
Pick[data, Unitize@data[[All, -1]], 1] // AbsoluteTiming // First
] &;
testCustom = freshKernelEvaluate[
$HistoryLength = 0;
SeedRandom@1;
data = RandomChoice[Range[0, 10], {#, 3}];
unitize[x_] := Unitize[x];
Pick[data, unitize@data[[All, -1]], 1] // AbsoluteTiming // First
] &;
testMemo = freshKernelEvaluate[
$HistoryLength = 0;
SeedRandom@1;
data = RandomChoice[Range[0, 10], {#, 3}];
unitize[x_] := unitize[x] = Unitize[x];
Pick[data, unitize@data[[All, -1]], 1] // AbsoluteTiming // First
] &;
</code></pre>
<p>Timings:</p>
<pre><code>SetDirectory@NotebookDirectory[];
s = OpenWrite@"results.dat";
k = 3 10^7;
Do[
Write[s,
If[OddQ@i,
{testBuiltin@k, testCustom@k, testMemo@k},
Reverse@{testMemo@k, testCustom@k, testBuiltin@k}
]
],
{i, 10^2}
] // AbsoluteTiming
file = Close@s;
(* {898.653, Null} *)
</code></pre>
<p>Result analysis:</p>
<pre><code>results = AssociationThread[{"Built-in", "Custom", "Memo"} -> Transpose@ReadList@file]
colors = {Blue, Orange, Darker@Green};
ListPlot[results, PlotRange -> All, PlotStyle -> colors]
Histogram[results, PlotRange -> All, ChartStyle -> colors, ChartLegends -> Automatic]
</code></pre>
<blockquote>
<p><|"Built-in" -> {1.22985, 1.22461, 1.23061, 1.23184, 1.2402, 1.22937,
1.25221, 1.21342, 1.23612, 1.22765, 1.23061, 1.23409, 1.25464,
1.21786, 1.23144, 1.24461, 1.24803, 1.24498, 1.24818, 1.23294,
1.2348, 1.51256, 1.51016, 1.46498, 1.48277, 1.49113, 1.38432,
1.23417, 1.23139, 1.23475, 1.23356, 1.22846, 1.23629, 1.25202,
1.23593, 1.24975, 1.22473, 1.23137, 1.2266, 1.25627, 1.21828,
1.2525, 1.23725, 1.24693, 1.24163, 1.23324, 1.28597, 1.23083,
1.22618, 1.23927, 1.22844, 1.23095, 1.21823, 1.23546, 1.23057,
1.22338, 1.22514, 1.23199, 1.23086, 1.21832, 1.22947, 1.22668,
1.2302, 1.24527, 1.23862, 1.48311, 1.48445, 1.47365, 1.24457,
1.25607, 1.26731, 1.22819, 1.23567, 1.23589, 1.27261, 1.22645,
1.22554, 1.23832, 1.22731, 1.2334, 1.25166, 1.26591, 1.22114,
1.24653, 1.22359, 1.22788, 1.22567, 1.25535, 1.23223, 1.24091,
1.24912, 1.23169, 1.23663, 1.23177, 1.2278, 1.55135, 1.4796,
1.49146, 1.49611, 1.23101},
"Custom" -> {1.23652, 1.23587, 1.23412, 1.22896, 1.22707, 1.23646,
1.25783, 1.26341, 1.24158, 1.22581, 1.22999, 1.24083, 1.23376,
1.23851, 1.24782, 1.22384, 1.2431, 1.23661, 1.23801, 1.24318,
1.23982, 1.53433, 1.48343, 1.54463, 1.48097, 1.47601, 1.23676,
1.24323, 1.2311, 1.22642, 1.23351, 1.23296, 1.23254, 1.23407,
1.23169, 1.24395, 1.24042, 1.24769, 1.23167, 1.21756, 1.2301,
1.23421, 1.24282, 1.23704, 1.23525, 1.2351, 1.25029, 1.23524,
1.22839, 1.22839, 1.23667, 1.26583, 1.22544, 1.22955, 1.22292,
1.22819, 1.27443, 1.24958, 1.24789, 1.22195, 1.21883, 1.22279,
1.21813, 1.22052, 1.23921, 1.5044, 1.49484, 1.50915, 1.23095,
1.23694, 1.22373, 1.24806, 1.22945, 1.24085, 1.23373, 1.22282,
1.2362, 1.23099, 1.23932, 1.24258, 1.25047, 1.26868, 1.23042,
1.22579, 1.2229, 1.23243, 1.2368, 1.22925, 1.2387, 1.23014,
1.21772, 1.2259, 1.22549, 1.23208, 1.26501, 1.33781, 1.48822,
1.48658, 1.25979, 1.26228},
"Memo" -> {1.29497, 1.29798, 1.29918, 1.29907, 1.31014, 1.29503,
1.29095, 1.3249, 1.29036, 1.30051, 1.2789, 1.29959, 1.2988, 1.2882,
1.29519, 1.28946, 1.31952, 1.32948, 1.32447, 1.29627, 1.31841,
1.5721, 1.57097, 1.55392, 1.56358, 1.55974, 1.28744, 1.3029,
1.28567, 1.2914, 1.29167, 1.29062, 1.29471, 1.29797, 1.30193,
1.30423, 1.30097, 1.29706, 1.29027, 1.29005, 1.29543, 1.2929,
1.29996, 1.29386, 1.29502, 1.31621, 1.31506, 1.29105, 1.30462,
1.28348, 1.30922, 1.28715, 1.30386, 1.29361, 1.29596, 1.30149,
1.28943, 1.29833, 1.31909, 1.2911, 1.31163, 1.28986, 1.29063,
1.28847, 1.29451, 1.46695, 1.55118, 1.55433, 1.29779, 1.29201,
1.29947, 1.29045, 1.28494, 1.29003, 1.29385, 1.2856, 1.31603,
1.33432, 1.28929, 1.29873, 1.29259, 1.28694, 1.28868, 1.28838,
1.29824, 1.29435, 1.29401, 1.30137, 1.2971, 1.29248, 1.29333,
1.2847, 1.28666, 1.28647, 1.29923, 1.30116, 1.56112, 1.56282,
1.29155, 1.2936}|></p>
</blockquote>
<p><a href="https://i.stack.imgur.com/FgBaV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FgBaV.png" alt="ListPlot and Histogram data matrix"></a></p>
|
3,831,387 | <p><span class="math-container">$X,Y\sim N(0,1)$</span> and are independent, consider <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span>.</p>
<p>I can see why <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span> are independent based on the fact that their joint distribution is equal to the product of their marginal distributions. Just, I'm having trouble understanding <em>intuitively</em> why this is so.</p>
<p>This is how I see it : When you look at <span class="math-container">$X+Y=u$</span>, the set <span class="math-container">$\{(x,u-x)|x\in\mathbb{R}\}$</span> is the list of possibilities for <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p>
<p>And intuitively, I understand independence of two random variables <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as, the probability of the event <span class="math-container">$A=a$</span> being completely unaffected by the event <span class="math-container">$B=b$</span> happening.</p>
<p>But when you look at <span class="math-container">$X+Y=u$</span> given that <span class="math-container">$X-Y=v$</span>, the set of possibilities has only one value <span class="math-container">$(\frac{u+v}{2},\frac{u-v}{2})$</span>.</p>
<p>So, <span class="math-container">$\mathbb{P}(X+Y=u|X-Y=v)\neq \mathbb{P}(X+Y=u)$</span>.</p>
<p>Doesn't this mean that <span class="math-container">$X+Y$</span> is affected by the occurrence of <span class="math-container">$X-Y$</span>?
So, they would have to be dependent?
I'm sorry if this comes off as really stupid, it has been driving me crazy, even though I am sure that they are independent, it just doesn't feel right.</p>
<p>Thank you.</p>
| antkam | 546,005 | <p>(1) The short, short answer is that it is <strong>wrong</strong> to say</p>
<p><span class="math-container">$$\mathbb{P}(X+Y=u|X-Y=v)\neq \mathbb{P}(X+Y=u)\,\,\,\,\,\,\text{(this is wrong)}$$</span></p>
<p>because in fact, both sides <span class="math-container">$=0$</span>, as these are continuous variables.</p>
<p>(2) The longer answer... Well first of all, the proper way to decide independence is to look at the joint PDF of <span class="math-container">$U = X+Y$</span> and <span class="math-container">$V=X-Y$</span>, as you have already done. This is equivalent to checking:</p>
<p><span class="math-container">$$f_U(U = u) \overset{?}= f_{U|V}(U = u \mid V = v) \equiv \frac{f_{U,V}(U = u \cap V = v)}{f_V(V = v)}$$</span></p>
<p>where you will find that both sides are non-zero and indeed equal.</p>
<p>(3) However, I wonder if your confusion comes from a more basic misunderstanding. It is of course true that <span class="math-container">$(U,V) = (u,v)$</span> defines exactly a single point in <span class="math-container">$(X,Y)$</span> space. However this does not automatically imply the conditional (prob or density) is <span class="math-container">$<$</span> the unconditional. After all, remember that all conditional prob (or density) are <em>ratios</em>. So if the numerator is very small but the denominator is proportionally small, then the ratio is unchanged and the conditional prob (or density) equals the unconditional version.</p>
<p>In your example, the unconditional asks for hitting a certain line <span class="math-container">$X+Y = u$</span> within the entire <span class="math-container">$2$</span>-D <span class="math-container">$(X,Y)$</span> plane, while the conditional asks for hitting a point within a specific line <span class="math-container">$X-Y = v$</span>. As mentioned, both probabilities are zero, but as you verified, both densities are non-zero and equal.</p>
<p>(4) Finally, you might like to know that Gaussians are the only variables with this property: if <span class="math-container">$X,Y$</span> are independent and <span class="math-container">$X+Y, X-Y$</span> are also independent, then <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> must be Gaussian (this is Bernstein's theorem). So that might explain why your gut just keeps telling you that <span class="math-container">$X+Y, X-Y$</span> "cannot possibly be independent" when <span class="math-container">$X,Y$</span> are independent. :) I was confused about this in the recent past -- see <a href="https://math.stackexchange.com/a/3708951/546005">this</a> for a brief further discussion.</p>
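<p>As a sanity check of point (2), here is a small sympy sketch, assuming standard normal <span class="math-container">$X,Y$</span> (any equal variances work the same way), verifying that the joint density of <span class="math-container">$(U,V)$</span> factors into a product of marginals:</p>

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)

# joint density of independent standard normals X, Y
f_xy = sp.exp(-(x**2 + y**2) / 2) / (2 * sp.pi)

# change of variables u = x + y, v = x - y, with |Jacobian| = 1/2
f_uv = f_xy.subs({x: (u + v) / 2, y: (u - v) / 2}) * sp.Rational(1, 2)

# the N(0, 2) marginal densities of U and V
g_u = sp.exp(-u**2 / 4) / (2 * sp.sqrt(sp.pi))
g_v = sp.exp(-v**2 / 4) / (2 * sp.sqrt(sp.pi))

# joint = product of marginals, i.e. U and V are independent
assert sp.simplify(f_uv - g_u * g_v) == 0
```

<p>The factorization <span class="math-container">$f_{U,V}=f_U f_V$</span> is exactly the statement that <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are independent.</p>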
|
1,842,826 | <blockquote>
<p>Explain why the columns of a $3 \times 4$ matrix are linearly dependent</p>
</blockquote>
<p>I also am curious what people are talking about when they say "rank"? We haven't touched anything with the word rank in our linear algebra class.</p>
<p>Here is what I've came up with as a solution, will this suffice?</p>
<p>I know that the columns of a matrix $A$ are <strong>linearly independent</strong> <strong>iff</strong> the equation $Ax = 0$ has <strong>only</strong> the <strong>trivial solution</strong>. $\therefore$ If the equation $Ax= 0$ does <strong>not</strong> have <strong>only</strong> the <strong>trivial solution</strong> $\implies$ that the columns of the matrix $A$ are <strong>linearly dependent</strong>?</p>
<p><strong>UPDATE</strong>
I don't understand why the columns of a $3\times 4$ matrix are always linearly dependent. What about $\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}$
<p>where $x_1 = 0$
now $x_1= x_2 = x_3...$
then we can see that $x_1v_1 + x_2v_2 + x_3.. = 0 $ and we have the trivial solution?</p>
| Ethan Bolker | 72,858 | <p>Without knowing how far you've gotten in your linear algebra class it's hard to produce a proof at the right level.</p>
<p>What's really going on here is that the four columns of a matrix with three rows are vectors in three dimensional space. Since the dimension of the space is three, any set with more than three vectors must be dependent. Of course you can't use that as a proof unless you've gotten that far in your studies.</p>
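<p>For intuition rather than proof, here is a small numpy sketch: for a generic $3\times 4$ matrix, the SVD always yields a unit vector $x$ with $Ax=0$, i.e. a nontrivial dependence among the four columns:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # a "generic" 3x4 matrix

# the last right-singular vector spans the (at least 1-dimensional) null space
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]

assert np.allclose(A @ x, 0, atol=1e-10)    # a nontrivial dependence among the columns
assert abs(np.linalg.norm(x) - 1) < 1e-12   # and x is certainly nonzero
```

<p>Since $A$ has only $3$ rows, it can have at most $3$ independent columns, so a nonzero null vector always exists.</p>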
|
246,589 | <p>Solve the boundary value problem
$$\begin{cases} \displaystyle \frac{\partial u}{\partial t} = 2 \frac{\partial^2 u}{\partial x^2} \\ \ \\u(0,t) = 10 \\ u(3,t) = 40 \\ u(x, 0) = 25 \end{cases}$$</p>
| Frenzy Li | 32,803 | <p><em><strong>HINT:</em></strong> Let's make the boundary conditions homogeneous by $v(x,t)=u(x,t)-10-10x$.<br>
(<em>How to see that?</em>). The PDE for $v(x,t)$ is thus (notice that $v_t=u_t$, $v_{xx}=u_{xx}$, <em>and why?</em>):</p>
<p>$$\begin{cases}
v_t - 2 v_{xx} = 0, & 0<x<3, t>0, \\
v(0,t)=v(3,t)= 0, & t\geqslant 0, \\
v(x,0)=15-10x, & 0<x<3.
\end{cases}$$</p>
<p>This question should be familiar to you now, as it's got homogeneous <em>b.c.</em> and a good <em>i.c.</em> It shouldn't be hard once you find $v$, because you get $u$ the instant you get $v$.</p>
<p><em><strong>Next</em></strong>: Familiar stuff: Apply the method of separation of variables, <em>i.e.</em> $v(x,t)=X(x)T(t)$, and solve for the eigenvalues and eigenfunctions. Expand $f(x)=15-10x$ into the Fourier series based on the eigenvalues, and solve for the specific result.</p>
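<p>If you want to double-check the Fourier coefficients numerically, here is a rough Python sketch. The closed form $b_n = 30\left(1+(-1)^n\right)/(n\pi)$ is my own hand computation of $b_n=\frac{2}{3}\int_0^3 (15-10x)\sin\frac{n\pi x}{3}\,dx$, so treat it as something to verify rather than as given:</p>

```python
import numpy as np

def bn_numeric(n, m=200_000):
    """Midpoint rule for b_n = (2/3) * int_0^3 (15 - 10x) sin(n pi x / 3) dx."""
    x = (np.arange(m) + 0.5) * (3.0 / m)
    return (2.0 / 3.0) * np.sum((15 - 10 * x) * np.sin(n * np.pi * x / 3)) * (3.0 / m)

def bn_closed(n):
    # hand-computed closed form, cross-checked against the quadrature above
    return 30 * (1 + (-1) ** n) / (n * np.pi)

for n in range(1, 9):
    assert abs(bn_numeric(n) - bn_closed(n)) < 1e-6
```

<p>So only the even modes survive, $v(x,t)=\sum_n b_n \sin\frac{n\pi x}{3}\, e^{-2(n\pi/3)^2 t}$, and finally $u(x,t)=v(x,t)+10+10x$.</p>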
|
308,117 | <p>I have the matrix
$$A := \begin{bmatrix}6& 9& 15\\-5& -10& -21\\ 2& 5& 11\end{bmatrix}.$$ Can anyone please tell me how to both find the eigenspaces by hand and also by using the Nullspace command on maple? Thanks.</p>
| Amzoti | 38,839 | <p>Given the matrix </p>
<p>$$A = \left(\begin{matrix}6& 9& 15\\-5& -10& -21\\ 2& 5& 11\end{matrix}\right).$$</p>
<p>Find the Eigensystem by hand.</p>
<p>First, let's find the eigenvalues by solving $\det(A - \lambda I) = 0$, so we have:</p>
<p>$$\det(A - \lambda I) = \left|\begin{matrix}6 - \lambda & 9& 15\\-5& -10 - \lambda & -21\\ 2& 5& 11 - \lambda\end{matrix}\right| = 0.$$</p>
<p>This gives us the characteristic polynomial: </p>
<p>$$-\lambda^3 + 7\lambda^2 - 16\lambda + 12 = 0$$</p>
<p>From this we get two eigenvalues (one is repeated) as: $\lambda_1 = 3, ~ \lambda_{2,3} = 2$</p>
<p>Next, we want to find the eigenvector for the eigenvalue $\lambda_1$, by solving the equation $(A - \lambda_1 I) v_1 = 0$.</p>
<p>$(A-\lambda_1 I)v_1 = (A-3I)v_1 = \left(\begin{matrix}3 & 9& 15\\-5& -13 & -21\\ 2& 5& 8\end{matrix}\right)v_1 = 0.$</p>
<p>Using the row-reduced-echelon-form, this leads to $v_1 = (1,-2,1).$</p>
<p>Next, we want to find the eigenvector for the eigenvalue $\lambda_2$, by solving the equation $(A - \lambda_2 I) v_2 = 0.$</p>
<p>$(A-\lambda_2 I)v_2 = (A-2I)v_2 = \left(\begin{matrix}4 & 9& 15\\-5& -12 & -21\\ 2& 5& 9\end{matrix}\right)v_2 = 0.$</p>
<p>Using the row-reduced-echelon-form, this leads to $v_2 = (3,-3,1).$</p>
<p>Since we have a repeated eigenvalue, care needs to be taken with algebraic and geometric multiplicities (know what those are) to decide whether the matrix is diagonalizable (you can work out these details).</p>
<ul>
<li><p>The algebraic multiplicity of an eigenvalue is the number of times it is a root.</p></li>
<li><p>The geometric multiplicity of an eigenvalue is the number of linearly independent eigenvectors for the eigenvalue.</p></li>
</ul>
<p>To find the generalized eigenvector for $\lambda_3$, we will solve $(A - \lambda_3 I)v_3 = v_2$ (you must have learned why this is in class), so we row-reduce the augmented matrix:</p>
<p>$\left(\begin{array}{@{}ccc|c@{}}
4 & 9& 15 & 3\\-5& -12 & -21 & -3\\ 2& 5& 9 & 1
\end{array}
\right)$</p>
<p>Using RREF, this results in $v_3 = (3, -1, 0)$.</p>
<p>Thus, we have:</p>
<p>$$\lambda_1 = 3, v_1 = (1, -2, 1)$$</p>
<p>$$\lambda_2 = 2, v_2 = (3, -3, 1)$$</p>
<p>$$\lambda_2 = 2, v_3 = (3,-1, 0)$$</p>
<p>Do you know how to use the information above to write the block-diagonal form, otherwise known as the <a href="http://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow"><em>Jordan Normal Form</em></a>?</p>
<p>$$A = P J P^{-1} = \begin{bmatrix} 3 & 3 & 1 \\ -3 & -1 & -2 \\ 1 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \cdot \begin{bmatrix} -1 & -3 & -5 \\ 1 & 2 & 3 \\ 1 & 3 & 6 \end{bmatrix}$$</p>
<p>Babak Sorouh already told you the nullspace and Mhenni Benghorbal showed the Maple commands, so no need to repeat that.</p>
<p>Regards</p>
|
4,136,248 | <p>Let <span class="math-container">$a,b\in\mathbb{R}^+$</span>.
Suppose that <span class="math-container">$\{x_n\}_{n=0}^\infty$</span> is a sequence satisfying
<span class="math-container">$$|x_n|\leq a|x_{n-1}|+b|x_{n-1}|^2, $$</span>
for all <span class="math-container">$n\in\mathbb{N}$</span>. How can we bound <span class="math-container">$|x_n|$</span> with a number <span class="math-container">$M_n$</span> depending on <span class="math-container">$n$</span>, <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$x_0$</span>?</p>
<p>That <span class="math-container">$|x_{n-1}|^2$</span> term is rather cumbersome to handle. Is there a combinatorial trick to overcome messy computations?</p>
<hr />
<p>To make the problem a bit easier, I am going to assume that
<span class="math-container">$$x_n=ax_{n-1}+bx_{n-1}^2.$$</span>
This implies the above inequality. Based on the answer of <a href="https://math.stackexchange.com/questions/704350/solve-a-quadratic-map">this</a> question, we can reduce the problem to
<span class="math-container">$$\hat x_n=\hat x_{n-1}^2+c,$$</span>
where <span class="math-container">$\hat x$</span> is some linear image of <span class="math-container">$x_n$</span> and <span class="math-container">$c$</span> is a constant depending on <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Maybe this is easier to bound <span class="math-container">$\hat x_n$</span>.</p>
| Alexandre Eremenko | 110,120 | <p>Let us define the sequence <span class="math-container">$y_n$</span> by <span class="math-container">$y_0=|x_0|$</span>, <span class="math-container">$y_{n+1}=ay_n+by_n^2$</span>. Then we have
<span class="math-container">$|x_n|\leq y_n$</span> for all <span class="math-container">$n$</span>, since <span class="math-container">$ay+by^2$</span> is increasing for positive <span class="math-container">$a,b$</span>,
and it is enough to estimate the positive sequence <span class="math-container">$y_n$</span>. Setting <span class="math-container">$z_n=by_n$</span>, we obtain a simpler recurrence relation
<span class="math-container">$z_{n+1}=az_n+z_n^2.$</span> For this last relation, the behavior depends on <span class="math-container">$a$</span>. If <span class="math-container">$a\geq 1$</span>, all orbits tend to <span class="math-container">$+\infty$</span> and <span class="math-container">$z_n\sim \exp(2^nu(a,z_0))$</span> where <span class="math-container">$u$</span> is a positive
function. If <span class="math-container">$a<1$</span>, then there is a positive fixed point <span class="math-container">$z^*=1-a$</span>. When <span class="math-container">$z_0>z^*$</span>
the behavior is as above, when <span class="math-container">$z_0<z^*$</span>, the orbit tends to <span class="math-container">$0$</span> as the geometric
progression: <span class="math-container">$z_n\sim a^n$</span>, and when <span class="math-container">$z_0=z^*$</span> then <span class="math-container">$z_n=z^*$</span>.</p>
<p>For the function <span class="math-container">$u(a,z)$</span>, there is no explicit expression, it is variously known as
the Green function of the complement of the Julia set, or the equilibrium potential of the Julia set. This function is actually easy to compute by the formula
<span class="math-container">$$u(a,z)=\lim_{n\to\infty}2^{-n}\log|f^n(z)|,$$</span>
where <span class="math-container">$f^n$</span> is the <span class="math-container">$n$</span>-th iteration of the function <span class="math-container">$f^1(z)=az+z^2.$</span> This expression converges very fast.</p>
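<p>A rough numeric sketch of this limit formula, here with <span class="math-container">$a=1$</span>, <span class="math-container">$z_0=1$</span>, using exact integer iterates; the convergence is extremely fast because the correction from step <span class="math-container">$n$</span> to <span class="math-container">$n+1$</span> is <span class="math-container">$2^{-(n+1)}\log(1+a/z_n)$</span>:</p>

```python
import math

def green_u(a, z0, n_iter=14):
    """Estimates of u(a, z0) = lim 2^{-n} log f^n(z0) for f(z) = a z + z^2."""
    z, estimates = z0, []
    for n in range(1, n_iter + 1):
        z = a * z + z * z                 # exact integer arithmetic for integer a, z0
        estimates.append(math.log(z) / 2 ** n)
    return estimates

est = green_u(1, 1)
# successive estimates agree to many digits almost immediately
assert abs(est[-1] - est[-2]) < 1e-9
```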
|
89,810 | <p>I have defined a table </p>
<pre><code>Table[Table[
Graphics3D[
Cuboid[radijDensity[[j, i]] {-Sin[kotiDensity[[j, i]]],
1 - Cos[kotiDensity[[j, i]]], 0}, {radijDensity[[j, i]]*
Sin[kotiDensity[[j, i]]],
radijDensity[[j, i]] (1 - Cos[kotiDensity[[j, i]]]) +
visina[[j, i + 1]], 0}]], {i, 1, n + m - 10}], {j, 1,
Length[force[[All, 1]]], 1}]
</code></pre>
<p>of cuboids.</p>
<p>But what I want is to have all of them on the same plot but at different z axis values. And I don't know how to do it. Something like <code>ListPlot3D</code> just that I want it to show those cuboids.</p>
| BenP1192 | 30,524 | <p>It's difficult to work with your question since you don't define your functions and variables, but hopefully this example will be enough.</p>
<p>Let's first make a table of cuboids with different z values (but using the same z value within each cuboid, so each one is a flat rectangle). This example uses the same x and y values for every <code>Cuboid</code> for simplicity, but you can obviously change that. </p>
<pre><code>list = Table[Cuboid[{0, 0, z}, {1, 1, z}], {z, 1, 5}]
</code></pre>
<p>Now apply <code>Graphics3D</code> to the list of cuboids</p>
<pre><code>Graphics3D @ list
</code></pre>
<p><a href="https://i.stack.imgur.com/Msn25.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Msn25.jpg" alt="Picture of Result"></a></p>
<p>By putting different z values for each cuboid, they appear at different levels.</p>
|
1,988,563 | <blockquote>
<p>Use the formal definition to prove the given limit:
$$\lim_{x\to\frac13^+}\sqrt{\frac{3x-1}2}=0$$</p>
</blockquote>
<p>Not sure how to deal with $\sqrt\cdot$. Appreciate a hint.</p>
| ec92 | 34,552 | <p>You want to show that for any $\epsilon > 0$, there is $\delta >0$ such that if
$$ 0< x - \frac13 < \delta, $$
then
$$ \sqrt{\frac{3x-1}{2}} < \epsilon. $$</p>
<p>This is equivalent to
$$0 < 3x - 1 < 2 \epsilon^2,$$
or
$$ 0 <x - \frac13 < \frac23 \epsilon^2,$$
so you can choose $\delta$ based on $\epsilon$. </p>
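<p>A quick numeric sanity check of this choice, $\delta=\frac23\epsilon^2$, as derived above (a sketch, sampling points in the interval):</p>

```python
import math

def check(eps, samples=1000):
    """Verify sqrt((3x - 1)/2) < eps for sampled x with 0 < x - 1/3 < delta."""
    delta = (2 / 3) * eps ** 2
    return all(
        math.sqrt((3 * (1 / 3 + delta * k / samples) - 1) / 2) < eps
        for k in range(1, samples)
    )

assert check(0.5) and check(0.1) and check(1e-3)
```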
|
3,422,830 | <blockquote>
<p>In the polynomial
<span class="math-container">$$
(x-1)(x^2-2)(x^3-3) \ldots (x^{11}-11)
$$</span>
what is the coefficient of <span class="math-container">$x^{60}$</span>? </p>
</blockquote>
<p>I've been trying to solve this question since a long time but I couldn't. I don't know whether opening the brackets would help because that is really a mess. I have run out of ideas. Would someone please help me to solve this question? </p>
| trancelocation | 467,003 | <p>The highest exponent possible is <span class="math-container">$1+2+ \cdots + 11 = 66$</span>.</p>
<p>Now, to create the exponent <span class="math-container">$60$</span>, you must drop a total degree of <span class="math-container">$6$</span>, so you can only take the constant terms from the factors with exponents <span class="math-container">$(1,2,3),(2,4),(1,5)$</span> and <span class="math-container">$6$</span>; these are all the sets of distinct exponents from <span class="math-container">$1,\dots,11$</span> summing to <span class="math-container">$6$</span>. Hence,</p>
<ul>
<li><span class="math-container">$1+2+3 \Rightarrow$</span> gives coefficient <span class="math-container">$(-1)(-2)(-3) = -6$</span></li>
<li><span class="math-container">$2+4 \Rightarrow$</span> gives coefficient <span class="math-container">$(-2)(-4) = 8$</span></li>
<li><span class="math-container">$1+5 \Rightarrow$</span> gives coefficient <span class="math-container">$5$</span></li>
<li><span class="math-container">$6 \Rightarrow$</span> gives coefficient <span class="math-container">$-6$</span></li>
</ul>
<p>Summing up gives <span class="math-container">$13-12 = \boxed{1}$</span>.</p>
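<p>This count is easy to double-check with sympy, expanding the product symbolically and reading off the coefficient:</p>

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Integer(1)
for k in range(1, 12):
    p *= x**k - k                           # (x - 1)(x^2 - 2)...(x^11 - 11)

poly = sp.Poly(sp.expand(p), x)
assert poly.degree() == 66                  # top degree 1 + 2 + ... + 11
assert poly.coeff_monomial(x**60) == 1      # matches -6 + 8 + 5 - 6 = 1
```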
|
286,930 | <p>I have been assigned this problem for homework:</p>
<blockquote>
<p>Show that, if $a < b + \epsilon$ for every $\epsilon \gt 0$, then $a\le b$.</p>
</blockquote>
<p>I have tried to go about this using Induction, but I don't know what the base case would be. It is obvious to me in my mind, but I don't know how to put it into mathematical terms on paper. any hints?</p>
| Clayton | 43,239 | <p>Hint: Use contradiction. If $a>b$, then show $a$ is not less than $b+\frac{a-b}{2}$. This contradiction implies $a\leq b$.</p>
|
286,930 | <p>I have been assigned this problem for homework:</p>
<blockquote>
<p>Show that, if $a < b + \epsilon$ for every $\epsilon \gt 0$, then $a\le b$.</p>
</blockquote>
<p>I have tried to go about this using Induction, but I don't know what the base case would be. It is obvious to me in my mind, but I don't know how to put it into mathematical terms on paper. any hints?</p>
| Steven Gamer | 47,540 | <p>Assume $a > b$. Then $a-b > 0$. Let $\epsilon = a - b$. Then $a < b + (a-b) = a$, a contradiction, so $a \le b$.</p>
|
424,445 | <p>I'm studying Pattern recognition and statistics and almost every book I open on the subject I bump into the concept of <strong>Mahalanobis distance</strong>. The books give sort of intuitive explanations, but still not good enough ones for me to actually really understand what is going on. If someone would ask me "What is the Mahalanobis distance?" I could only answer: "It's this nice thing, which measures distance of some kind" :) </p>
<p>The definitions usually also contain eigenvectors and eigenvalues, which I have a little trouble connecting to the Mahalanobis distance. I understand the definition of eigenvectors and eigenvalues, but how are they related to the Mahalanobis distance? Does it have something to do with changing the basis in Linear Algebra etc.?</p>
<p>I have also read these former questions on the subject:</p>
<p><a href="https://stats.stackexchange.com/questions/41222/what-is-mahanalobis-distance-how-is-it-used-in-pattern-recognition">https://stats.stackexchange.com/questions/41222/what-is-mahanalobis-distance-how-is-it-used-in-pattern-recognition</a></p>
<p><a href="https://math.stackexchange.com/questions/261557/intuitive-explanations-for-gaussian-distribution-function-and-mahalanobis-distan">Intuitive explanations for Gaussian distribution function and mahalanobis distance</a></p>
<p><a href="http://www.jennessent.com/arcview/mahalanobis_description.htm" rel="nofollow noreferrer">http://www.jennessent.com/arcview/mahalanobis_description.htm</a></p>
<p>The answers are good and the pictures nice, but still I don't <strong>really</strong> get it... I have an idea but it's still in the dark. Can someone give a "How would you explain it to your grandma" explanation so that I could finally wrap this up and never again wonder what the heck a Mahalanobis distance is? :) Where does it come from, what, why? </p>
<p>I will post this question on two different forums so that more people could have a chance answering it and I think many other people might be interested besides me :) </p>
<p>Thank you in advance for help!</p>
| Avitus | 80,800 | <p>As a starting point, I would see the Mahalanobis distance as a suitable deformation of the usual Euclidean distance $d(x,y)=\sqrt{\langle x-y,x-y \rangle}$ between vectors $x$ and $y$ in $\mathbb R^{n}$. The extra piece of information here is that $x$ and $y$ are actually <em>random</em> vectors, i.e. two different realizations of a vector $X$ of random variables, lying in the background of our discussion. The question that the Mahalanobis distance tries to address is the following: </p>
<p>"how can I measure the "dissimilarity" between $x$ and $y$, knowing that they are realizations of the same multivariate random variable?" </p>
<p>Clearly the dissimilarity of any realization $x$ with itself should be equal to 0; moreover, the dissimilarity should be a symmetric function of the realizations and should reflect the existence of a random process in the background. This last aspect is taken into consideration by introducing the covariance matrix $C$ of the multivariate random variable.</p>
<p>Collecting the above ideas we arrive quite naturally at </p>
<p>$$D(x,y)=\sqrt{\langle (x-y),C^{-1}(x-y)\rangle} $$</p>
<p>If the components $X_i$ of the multivariate random variable $X=(X_1,\dots,X_n)$ are uncorrelated, with, for example, $C_{ij}=\delta_{ij}$ (we "normalized" the $X_i$'s in order to have $Var(X_i)=1$), then the Mahalanobis distance $D(x,y)$ <em>is</em> the Euclidean distance between $x$ and $y$. In the presence of nontrivial correlations, the (estimated) covariance matrix $C$ "deforms" the Euclidean distance.</p>
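<p>A small numpy sketch of this deformation; the <code>mahalanobis</code> helper below is just the formula above written out (the name is mine, not a library function). With $C=I$ the distance reduces to the Euclidean one, while a correlated $C$ changes it:</p>

```python
import numpy as np

def mahalanobis(x, y, C):
    """D(x, y) = sqrt((x - y)^T C^{-1} (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.solve(C, diff)))

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# with C = I the Mahalanobis distance is the Euclidean distance
assert np.isclose(mahalanobis(x, y, np.eye(2)), np.linalg.norm(x - y))

# a correlated covariance "deforms" the distance
C = np.array([[1.0, 0.9], [0.9, 1.0]])
assert not np.isclose(mahalanobis(x, y, C), np.linalg.norm(x - y))
```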
|
1,527,137 | <p>Usually one has the matrix and wishes to estimate the eigenvalues, but here it's the other way around: I have the positive eigenvalues of an unknown real positive definite matrix and I would like to say something about its diagonal elements.</p>
<p>The only result I was able to find is that the sum of the eigenvalues coincides with the trace of the matrix; does anyone know of anything more specific, or perhaps can point me to any literature that discusses this problem?</p>
| B. S. Thomson | 281,004 | <p>Well you are not getting much help here and perhaps the homework deadline is looming--so here is a push.</p>
<p>The function $f:[0,T]\to \mathbb{R}$ has bounded variation there. You already have seen, for any $0\leq a<b\leq T$, that $$V(f,[a,b]) + f(b)-f(a)\geq 0$$ is trivial.</p>
<p>What you need to show now is that $$V(f,[0,a]) + V(f,[a,b])= V(f,[0,b]).$$
As is often the case, to show equality you show inequality--two of them. That is you have two inequalities to establish:
$$V(f,[0,a]) + V(f,[a,b]) \leq V(f,[0,b])$$ and
$$V(f,[0,a]) + V(f,[a,b]) \geq V(f,[0,b]).$$
One is easy and the other will require you to get your hands dirty with partitions and the like. (Think of a partition of $[0,b]$ and what would happen if you add "$a$" to it.)</p>
<p>[Note that I am delaying the notation $S^f_t=V(f,[0,t])$ for later since it is not very intuitive and can easily mislead you.]</p>
<hr>
<p>Alternatively, you can just search around for textbooks that give a proof of Jordan's theorem for BV functions, that characterizes them as differences of monotone functions (which is what you are doing here). Some use exactly this method; Royden's <em>Real Analysis</em> uses the clever idea of defining two half-variations (a positive variation and a negative variation) that ends up in the same place. My favorite presentation of this topic is in Zygmund and Wheeden, <em>Measure and Integral</em>. The techniques needed to solve this problem are, while elementary, absolutely essential to an ability to understand and do real analysis. It is not a waste of time to linger on this problem until it is all completely transparent to you. </p>
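<p>A numeric illustration of the additivity claim, a rough sketch using partition sums on a fine grid; for a smooth function such as $\sin$ the total variation also equals $\int|f'|$, which gives an independent check:</p>

```python
import math

def variation(f, a, b, n=50_000):
    """Partition sum approximating V(f, [a, b]) on a uniform grid."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

V1 = variation(math.sin, 0.0, 1.0)
V2 = variation(math.sin, 1.0, 2.0)
V  = variation(math.sin, 0.0, 2.0)

assert abs(V1 + V2 - V) < 1e-6              # additivity over [0,1] and [1,2]
assert abs(V - (2 - math.sin(2))) < 1e-6    # for smooth f, V(f,[a,b]) = int |f'|
```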
|
104,297 | <p>How would I go about solving</p>
<p>$(1+i)^n = (1+\sqrt{3}i)^m$ for integer $m$ and $n$?</p>
<p>I have tried </p>
<pre><code>Solve[(1+I)^n == (1+Sqrt[3] I)^m && n ∈ Integers && m ∈ Integers, {n, m}]
</code></pre>
<p>but this does not give the answer in the 'correct' form.</p>
| rhermans | 10,397 | <pre><code>Last@Reap@Do[
If[
ReIm[(1 + I)^n] == ReIm[(1 + Sqrt[3] I)^m]
, Sow[{n, m}]
]
, {n, 100}
, {m, 100}
]
</code></pre>
<blockquote>
<pre><code>{{{24, 12}, {48, 24}, {72, 36}, {96, 48}}}
</code></pre>
</blockquote>
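<p>The pattern $n=24k$, $m=12k$ can also be sanity-checked directly in Python; comparing moduli, $|1+i|=\sqrt2$ and $|1+\sqrt3\,i|=2$, both sides have modulus $2^{12k}$:</p>

```python
import cmath

w1 = 1 + 1j
w2 = 1 + 3 ** 0.5 * 1j
for n, m in [(24, 12), (48, 24), (72, 36), (96, 48)]:
    assert cmath.isclose(w1 ** n, w2 ** m, rel_tol=1e-9)
```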
|
310,930 | <p>Let $U$ be the subspace of $\mathbb{R}^3$ spanned by $\{(1,1,0), (0,1,1)\}$. Find a subspace $W$ of $\Bbb R^3$ such that $\mathbb{R}^3 = U \oplus W$.</p>
<p>As I am having an examination tomorrow, it would be really helpful if one could explain the methodology for doing this problem. I am mostly interested in the methodology, rather than the result. </p>
<p>Thank you very much in advance.</p>
| Gerry Myerson | 8,269 | <p>Can you see that $W$ must be one-dimensional? So you are just looking for a single (non-zero) vector that's not in $U$ --- then let $W$ be the span of that vector. </p>
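<p>For instance $w=(1,0,0)$ works; a quick numpy check that this vector really lies outside $U$ (equivalently, that the three vectors form a basis):</p>

```python
import numpy as np

u1, u2 = [1, 1, 0], [0, 1, 1]      # the given spanning set of U
w = [1, 0, 0]                      # candidate vector outside U
B = np.column_stack([u1, u2, w])

# nonzero determinant: the three vectors form a basis of R^3,
# so R^3 = U + span(w), and the sum is direct since dim U + dim W = 3
assert abs(np.linalg.det(B)) > 1e-9
```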
|
786,655 | <p>Say we have two r.v.s $X$ and $Y$ which are independent and differently distributed (e.g. $X$ follows a bell curve and $Y$ follows an exponential distribution with parameter $\lambda > 0$).</p>
<p>What are the different methods to numerically compute the distributions of $X+Y$, $X*Y$, $X/Y$, $\min(X,Y)$, etc.?</p>
<p>I read about the Mellin transform and Monte-Carlo simulation, but since these methods date back a long time, it seemed to me that there must be something that already exists for such operations within a library or a module in a programming language like Matlab or R (or any other platform).</p>
<p>Any ideas|suggestions on this matter would be greatly appreciated!</p>
| fgp | 42,986 | <p>If $X,Y$ are independent and have distribution functions $F_X,F_Y$ and densities $f_X,f_Y$, you have $$\begin{eqnarray}
&P(X+Y \leq z) &=& \int_{x+y \leq z} f_X(x) f_Y(y) \,d(x,y) = \int_{-\infty}^\infty \int_{-\infty}^{z-x} f_X(x) f_Y(y) \,dy \,dx \\
&&=& \int_{-\infty}^\infty f_X(x) F_Y(z-x) \,dx \text{,} \\
&f_{X+Y}(z) &=& \frac{d}{dz}P(X+Y \leq z) = \int_{-\infty}^\infty f_X(x) f_Y(z-x)\,dx \\
\text{ and } \\ \\
&P(\min\{X,Y\} \leq z) &=& P(X \leq z \text{ or } Y \leq z) = F_X(z) + F_Y(z) - F_X(z)F_Y(z) \text{,} \\
&f_{\min X,Y}(z) &=& f_X(z) + f_Y(z) - f_X(z)F_Y(z) - F_X(z)f_Y(z) \\\text{ respectively } \\ \\
&P(\max\{X,Y\} \leq z) &=& F_X(z)F_Y(z) \text{,} \\
&f_{\max X,Y}(z) &=& f_X(z)F_Y(z) + F_X(z)f_Y(z) \text{.}
\end{eqnarray}$$
For $XY$ and $X/Y$ there are similar transformation formulas, and you can always evaluate the resulting integrals numerically. For $X+Y$, using that $f_{X+Y}$ is the convolution of $f_X$ and $f_Y$ might help.</p>
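<p>As a small symbolic check of the $\min$ formula, for independent exponentials it recovers the well-known fact that the minimum is again exponential with the rates added (a sympy sketch):</p>

```python
import sympy as sp

z, a, b = sp.symbols('z a b', positive=True)
Fx = 1 - sp.exp(-a * z)                     # Exponential(a) CDF
Fy = 1 - sp.exp(-b * z)                     # Exponential(b) CDF
F_min = Fx + Fy - Fx * Fy                   # formula above for independent X, Y
assert sp.simplify(F_min - (1 - sp.exp(-(a + b) * z))) == 0
```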
|
32,809 | <p>Is it possible to define new graphics directives?</p>
<p>For example, suppose I want to be able to use the following code:</p>
<pre><code>Graphics[{ BigPointSize[0.07], SmallPointSize[0.04],
Red, BigPoint[{1,1}], BigPoint[{1,3}], SmallPoint[{3,1}],
Blue, SmallPoint[{2,2}], SmallPoint[{3,2}], BigPoint[{0,0}]
}]
</code></pre>
<p>Is there any way to define <code>BigPointSize</code>, <code>SmallPointSize</code>, <code>BigPoint</code>, and <code>SmallPoint</code> so that this code will work as intended? Ideally <code>BigPointSize</code> and <code>SmallPointSize</code> should have all of the functionality of other graphics directives, e.g. scoping inside of lists, and the ability to call the command multiple times within the same list.</p>
<p>(Obviously it's possible to draw these points in other ways, but I'm curious whether it's possible to get this <em>syntax</em> to work.)</p>
<p><strong>Edit:</strong> Just to clarify, I would like <code>BigPointSize</code> and <code>SmallPointSize</code> to work the same way as PointSize and other graphics directives. For example, the code</p>
<pre><code>Graphics[{ BigPointSize[0.1],
{ BigPointSize[0.05], BigPoint[{0,0}] },
BigPoint[{1,0}]
}]
</code></pre>
<p>should produce one point of size <code>0.05</code> and one point of size <code>0.1</code>.</p>
| Carl Woll | 45,431 | <p>It is possible to use <a href="http://reference.wolfram.com/language/ref/Style" rel="noreferrer"><code>Style</code></a> options as "graphics directives", and <a href="http://reference.wolfram.com/language/ref/CurrentValue" rel="noreferrer"><code>CurrentValue</code></a> can be used to query the values of these options. For example, suppose we use <a href="http://reference.wolfram.com/language/ref/AutoIndent" rel="noreferrer"><code>AutoIndent</code></a> as the graphics directive:</p>
<pre><code>Graphics[
{
AutoIndent -> .1, {PointSize[Dynamic@CurrentValue@AutoIndent], Point[{0,0}]},
{AutoIndent -> .2, {PointSize[Dynamic@CurrentValue@AutoIndent], Point[{1,1}]}},
{PointSize[Dynamic@CurrentValue@AutoIndent], Point[{1,0}]}
},
ImageSize->200,
PlotRange->{{-1,2},{-1,2}},
Axes->True
]
</code></pre>
<p><a href="https://i.stack.imgur.com/8rFek.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8rFek.png" alt="enter image description here"></a></p>
<p>Notice how the point at {1, 0} uses the initial <a href="http://reference.wolfram.com/language/ref/AutoIndent" rel="noreferrer"><code>AutoIndent</code></a> value. It is also possible to use <code>Typeset`MakeBoxes</code> so that <code>BigPointSize</code> and <code>BigPoint</code> only evaluate when the <a href="http://reference.wolfram.com/language/ref/Graphics" rel="noreferrer"><code>Graphics</code></a> expression is converted to boxes, as pointed out by Simon Woods in his <a href="https://mathematica.stackexchange.com/a/27209/45431">answer</a> to <a href="https://mathematica.stackexchange.com/q/27184/45431">How to create custom <code>Graphics</code> primitive?</a>. I will use a couple <a href="http://reference.wolfram.com/language/ref/AutoStyleOptions" rel="noreferrer"><code>AutoStyleOptions</code></a> settings as my "graphics directives":</p>
<pre><code>Typeset`MakeBoxes[BigPointSize[rhs_], StandardForm, Graphics] := Typeset`Hold[
"AutoStyleOptionsHighlightGlobalToLocalScopeConflicts"->rhs
]
Typeset`MakeBoxes[SmallPointSize[rhs_], StandardForm, Graphics] := Typeset`Hold[
"AutoStyleOptionsHighlightMissingArgumentsWithTemplate"->rhs
]
Typeset`MakeBoxes[BigPoint[a_], StandardForm, Graphics] := {
PointSize -> Dynamic @ Replace[
CurrentValue["AutoStyleOptionsHighlightGlobalToLocalScopeConflicts"],
Except[_?NumberQ]->.1
],
PointBox[a]
}
Typeset`MakeBoxes[SmallPoint[a_], StandardForm, Graphics] := {
PointSize -> Dynamic @ Replace[
CurrentValue["AutoStyleOptionsHighlightMissingArgumentsWithTemplate"],
Except[_?NumberQ]->.03
],
PointBox[a]
}
</code></pre>
<p>And here's an example:</p>
<pre><code>Graphics[
{
Line[{{0,-1},{6,0}}],
BigPoint[{1,0}], Red, SmallPoint[{1,.5}],
BigPointSize[0.02], SmallPointSize[.01],
BigPoint[{2,0}], Blue, SmallPoint[{2, .5}],
{
BigPointSize[0.03], SmallPointSize[.04], BigPoint[{3,0}],
Green, BigPointSize[0.04], BigPoint[{4,0}], SmallPoint[{4, .5}]
},
BigPoint[{5,0}], SmallPoint[{5, .5}]
},
Frame->True,
PlotRange->{{0,6},{-1,1}}
]
</code></pre>
<p><a href="https://i.stack.imgur.com/WAsrB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WAsrB.png" alt="enter image description here"></a></p>
|
3,464,885 | <p>I need help integrating this one.
<span class="math-container">$\int \frac{\sin(50x)}{1+\cos^2(50x)}\,dx$</span></p>
<p>I started with <span class="math-container">$u = 50x$</span> as my <span class="math-container">$u$</span>-sub</p>
<p><span class="math-container">$$\int \frac{\sin(u)}{1+\cos^2(u)}\,dx$$</span></p>
| user284331 | 284,331 | <p><span class="math-container">\begin{align*}
\int\dfrac{\sin u}{1+\cos^{2}u}du&=-\int\dfrac{1}{1+\cos^{2}u}d(\cos u)=-\tan^{-1}(\cos u)+C.
\end{align*}</span></p>
|
3,464,885 | <p>I need help integrating this one.
<span class="math-container">$\int \frac{\sin(50x)}{1+\cos^2(50x)}\,dx$</span></p>
<p>I started with <span class="math-container">$u = 50x$</span> as my <span class="math-container">$u$</span>-sub</p>
<p><span class="math-container">$$\int \frac{\sin(u)}{1+\cos^2(u)}\,dx$$</span></p>
| Kenta S | 404,616 | <p><span class="math-container">\begin{equation}
\begin{split}
\int \frac{\sin(50x)}{1+\cos^2(50x)}dx&=\frac1{50}\int \frac{\sin(u)}{1+\cos^2(u)}du\\
&=-\frac1{50}\int \frac{(\cos(u))'}{1+\cos^2(u)}du\\
&=-\frac1{50}\arctan(\cos(u))+C\\
&=-\frac1{50}\arctan(\cos(50x))+C\\
\end{split}
\end{equation}</span></p>
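<p>The antiderivative is easy to verify by differentiation, e.g. with sympy:</p>

```python
import sympy as sp

x = sp.symbols('x')
F = -sp.atan(sp.cos(50 * x)) / 50
integrand = sp.sin(50 * x) / (1 + sp.cos(50 * x) ** 2)
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```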
|
135,663 | <p>It is a problem from Hatcher's book, and it is my homework problem.</p>
<p>It is Section 2.2, Problem 3, stating:</p>
<p>Let $f:S^n\to S^n$ be a map of degree zero. Show that there exist points $x,y \in S^n$ with $f(x)=x$ and $f(y)=-y$. Use this to show that if $F$ is a continuous vector field defined on the unit ball $D^n$ in $\mathbb{R}^n$ such that $F(x) \neq 0$ for all $x$, then there exists a point on the boundary of $D^n$ where $F$ points radially outward and another point on the boundary of $D^n$ where $F$ points radially inward.</p>
<p>I could get the first statement from the properties of degree. However, in order to apply this fact to the second statement, I should know that $F$ restricted to $S^{n-1}$ and normalized, so that $\bar F:S^{n-1} \to S^{n-1}$, is of degree zero. If I can conclude that $\bar F$ is not surjective, then it's all done. However, I am not sure how to show why $\bar F$ is of degree zero. </p>
<p>Any comment about this would be appreciated! </p>
| Community | -1 | <p>Assume $H \leq A_5$ with $|H| = 15$ and let $X:=\{gH \mid g \in G\}$. Then $\# X = 4$. $G$ acts on $X$ by left multiplication, i.e. $g'(gH) = (g'g)H$. Let $\alpha \in A_5$ be a 5-cycle. Then $\langle \alpha\rangle$ acts on $X$, too. But the length of an orbit divides the group order, which is 5. But $\# X = 4 < 5$, so each orbit contains only one element. That means $\alpha H = H$ for every 5-cycle $\alpha$. So $\alpha \in H$. There are 24 such $\alpha$. Contradiction, because $H$ cannot contain more than 15 elements.</p>
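<p>The count of 24 five-cycles is easy to confirm with sympy's permutation groups, e.g.:</p>

```python
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(5)
assert G.order() == 60
five_cycles = [g for g in G.elements if g.order() == 5]
assert len(five_cycles) == 24       # more than 15, as used in the contradiction
```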
|
604,070 | <p>While doing the proof of the existence of completion of a metric space, usually books give an idea that the missing limit points are added into the space for obtaining the completion. But I do not understand from the proof where we are using this idea as we just make equivalence classes of asymptotic Cauchy sequences and accordingly define the metric.</p>
| Henno Brandsma | 4,280 | <p>One way to construct the completion of a metric space $(X,d)$ is by isometrically embedding it into a large metric space, that is known to be complete already. </p>
<p>A classical way is due to (IIRC) Banach: define $CB(X)$ to be the set of all bounded continuous real-valued functions on $X$ with metric from the supremum norm. This is a Banach space (so a complete metric space in particular). </p>
<p>Fix $p \in X$; we can define for every $x \in X$ the function $f_x: X \rightarrow \mathbb{R}$ by $f_x(y) = d(x,y) - d(p,y)$. By standard facts this is a continuous function on $X$ and it is bounded by $d(x,p)$ (from the triangle inequality). It's not too hard to verify that $F(x) = f_x$ defines an isometry of $X$ into $CB(X)$. </p>
<p>The closure of $F[X]$ in $CB(X)$ is then a completion of $X$: it contains $X$ ( or really $F[X]$) as an isometric dense subset and is itself complete in its inherited metric as a closed subset of a complete metric space.</p>
<p>The extra points we add to $X$ to make it complete are then the points in $\overline{F[X]} \setminus F[X]$, as it were. </p>
<p>It's not as elegant as the usual construction (equivalence classes of Cauchy sequences) because we need to have $\mathbb{R}$ constructed first and know it to be complete in its usual metric (to get completeness of $CB(X)$). The equivalence class approach is more self-contained, though a bit more abstract. </p>
|
2,820,779 | <p>So I have this integral </p>
<p>$$\int_{\sqrt[3]{4}}^{\sqrt[3]{3+e}}x^2 \ln(x^3-3)\,dx.$$</p>
<p>I was thinking of using u subsitution to make everything easier. </p>
<p>I made $u = x^3-3$ and $du = 3x^2dx$.</p>
<p>So I would then re-write my integral as </p>
<p>$$\frac13\int_{\sqrt[3]{4}}^{\sqrt[3]{3+e}} \ln(x^3-3).$$</p>
<p>How would I proceed from here. Should I plug in the integral values? Wouldn't I need to integrate the $\ln()$? Should I use u substitution again? Please help!</p>
<p>I already asked this question, so don't mark it as a duplicate. Unfortunately I couldn't understand what other people were writing.</p>
| Bernard | 202,857 | <p>When you integrate by substitution, you have to express the differential form under the integral sign, $f(x)\,\mathrm dx$, as a differential form $\;g(u)\,\mathrm d u$, and replace the bounds for the integral in $x$ with the corresponding bounds for the new variable $u$.</p>
<p><em>Some details in this case</em>:</p>
<p>If you set $u=x^3-3$, you have $\mathrm d u=3x^2\,\mathrm d x$, so
$$\int x^2 \ln(x^3-3)\,dx=\int \ln(x^3-3)(x^2\,dx)=\int\ln u\,\frac13\mathrm du=\frac13\int\ln u\,\mathrm du .$$</p>
<p>Now let's take care of the bounds:</p>
<ul>
<li>$x=\sqrt[3]4\leftrightarrow u=4-3=1$,</li>
<li>$x=\sqrt[3]{3+\mathrm e}\leftrightarrow u=3+\mathrm e-3=\mathrm e$</li>
</ul>
<p>Thus the integral is
$$\frac13\int_1^{\mathrm e}\ln u\,\mathrm du=\frac13(u\ln u-u)\biggr|_1^{\mathrm e}=\frac13.$$</p>
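<p>As a quick sanity check (an aside, not part of the original answer), simple numerical quadrature on the original integral reproduces $\frac13$. A Python sketch using Simpson's rule:</p>

```python
import math

def f(x):
    # the original integrand x^2 * ln(x^3 - 3)
    return x * x * math.log(x ** 3 - 3)

def simpson(f, a, b, n=10_000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a = 4 ** (1 / 3)
b = (3 + math.e) ** (1 / 3)
value = simpson(f, a, b)          # should be very close to 1/3
```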
|
2,647,123 | <p>I'm asked to find a $3\times3$ matrix $A$ in which no entry is $0$ but $A^2=0$. </p>
<p>The problem is that if I brute force it, I am left with a system of 6 equations (not all of which are linear...) and 6 unknowns. Whilst I could in theory solve that, is there a more intuitive way of solving this problem, or am I going to have to brute force the solution?</p>
<p>Any suggestions would be greatly appreciated.</p>
| Henry | 6,460 | <p>Force the first element of $A^2$ to be $0$, for example by finding $b,c,d,e$ with $bd+ce \lt 0$ and let $a=\pm{\sqrt{-(bd+ce)}}$ </p>
<p>Then consider</p>
<p>$$\begin{bmatrix}a&b&c\\d&db/a&dc/a\\e&eb/a&ec/a\end{bmatrix}$$</p>
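<p>As an illustrative aside, one can verify this construction numerically for a hypothetical choice $b=c=1$, $d=e=-1$ (so $bd+ce=-2<0$ and $a=\sqrt2$); a Python sketch:</p>

```python
import math

b, c, d, e = 1.0, 1.0, -1.0, -1.0      # chosen so that b*d + c*e < 0
a = math.sqrt(-(b * d + c * e))        # a = sqrt(2)

# the matrix from the answer: rows 2 and 3 are multiples of row 1
A = [[a,      b,          c        ],
     [d,      d * b / a,  d * c / a],
     [e,      e * b / a,  e * c / a]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)                      # should vanish (up to rounding)
```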
|
2,647,123 | <p>I'm asked to find a $3\times3$ matrix $A$ in which no entry is $0$ but $A^2=0$. </p>
<p>The problem is that if I brute force it, I am left with a system of 6 equations (not all of which are linear...) and 6 unknowns. Whilst I could in theory solve that, is there a more intuitive way of solving this problem, or am I going to have to brute force the solution?</p>
<p>Any suggestions would be greatly appreciated.</p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>To do this, first consider a trivial matrix with lots of zeros that does satisfy this condition. One easy one is the matrix with a single $1$ on the diagonal above the principal. That is:
$$
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0& 0 \\
0 & 0 & 0
\end{pmatrix}
$$</p>
<p>Now, given any matrix $A$, consider for any invertible matrix $B$, the matrix $B^{-1}AB = C$. If $A^2 =0$, then $C^2 = B^{-1}ABB^{-1}AB = B^{-1}A^2B =0$. So we can modify the matrix above, which we'll call $D$, to get a matrix with no zeros. </p>
<p>Actually, I urge you to take the formula which you have for the inverse of a matrix, and compute $B^{-1}DB$. You will get that each entry of the matrix $B^{-1}DB$ actually looks like some entry (in fact, an entry of the first column) of $B^{-1}$ times some entry of $B$. (Please note: I expect this to be a simple task for you. If it is not doable, please tell me.)</p>
<p>But entries of $B^{-1}$ are just the cofactors of $B$, with some sign, divided by the determinant. That is, <em>all we need to do is to ensure that the determinant, all cofactors and all entries of $B$ are non-zero</em>. This I expect to be a simple task by experiment : start with some fixed first row of $B$, say $1,2,3$ or something, then find non-zero entries of the second rows so that all cofactors are non-zero, and then do this for the third rows. This can be done by simple trial and error, and will give you a matrix $B$ such that $B^{-1} DB$ has non-zero entries.</p>
<p>This is the simplest approach if you are not keen on rotation matrices or anything of that ilk, mentioned in some other answers.</p>
<p>There are standard matrices which satisfy the condition required for $B$ above, like Hankel, Vandermonde and Toeplitz matrices, which in fact satisfy strict positivity of all minors. But even avoiding these, coming up with one such matrix should be the challenge if you are at a level where you cannot use rotation matrices etc.</p>
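<p>As an illustrative aside, here is a Python sketch of this conjugation approach with one hypothetical choice of $B$ (entries, cofactors and determinant all nonzero, with $\det B = 1$); the conjugate $C = B^{-1}DB$ was computed by hand for this particular $B$ and is verified below:</p>

```python
# D is the nilpotent "model" matrix with a single 1 above the diagonal.
D = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 0]]
# B is invertible over the integers: det B = 1, and all entries,
# cofactors and the determinant are nonzero.
B = [[1, 2, 3],
     [4, 5, 7],
     [2, 3, 4]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# C = B^{-1} D B, worked out by hand (possible since det B = 1):
C = [[-4,  -5,  -7],
     [-8, -10, -14],
     [ 8,  10,  14]]
```

<p>Note that $C$ has no zero entries, squares to zero, and satisfies $BC = DB$ (equivalent to $C = B^{-1}DB$).</p>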
|
2,491,394 | <p>So, my problem is with Axiom 5 of the proof, where Gödel considers necessary existence as a property. However, by his own definition, a 'property' applies to objects including those whose necessary existence has not even been proven, as can be inferred from Theorem 1. This, to me, seems like the perfect example of question begging, and if such logic is to be used on other examples, the conclusions may be contradictory. For example, I can prove that a Godlike object doesn't exist using the same logic and assuming Gödel's axioms:</p>
<ol>
<li><span class="math-container">$Df. 1:A_φ(x)⇔(◇∃x⇒(◻∃x∧φ(x)))$</span></li>
<li><span class="math-container">$Ax. 1:(P(φ)∧◻∀x(φ(x)⇒ψ(x)))⇒P(ψ)$</span></li>
<li><span class="math-container">$Ax. 2:P(¬φ)⇔¬P(φ)$</span></li>
<li><span class="math-container">$Th. 1:P(φ)⇒◇∃x(φ(x))$</span></li>
<li><span class="math-container">$Ax. 3:P(◇∃x⇒◻∃x)$</span></li>
<li><span class="math-container">$Th. 2:∀φ(P(φ)⇒◻∃x(A_φ(x))$</span></li>
</ol>
<p>Ax.3 is inferred from Gödel's fifth axiom, where necessary existence is a positive property. From here, I can conclude that any positive property that one can think of exists. For example, if being a unicorn is a positive property (which it is) then invisible flying unicorns also exist (because God is also flying and invisible, so these are positive properties).</p>
<p>Note that I didn't, in any way, deviate from the axioms in Gödel's original theorem, and I didn't add any extra ones.</p>
<p>Obviously, though, it is very unlikely that I've just proven Gödel's proof to be wrong, so my 'theorem' must be wrong. However, I've followed through each of the steps in my 'proof' many times over and didn't manage to find any deviation from Gödel's axioms either time. Can anyone help me with this?</p>
| Nagase | 117,698 | <p>Note that Theorem 1 of your link actually states: $P(\phi) \implies \Diamond \exists x \phi(x)$, i.e. if $\phi$ is a positive property, then possibly there is something that instantiates it. Given this, Gödel needs an explicit axiom stating that being god-like is a positive property (Axiom 3 in your link: $P(G)$). So, in order for <em>your</em> proof to go through by use of Theorem 1, you'll need an analogous Axiom 3$'$, stating that $A(x)$ is a positive property. But then: (i) you'll have introduced an extra axiom, extraneous to Gödel's own axioms, and (ii) this axiom is not very plausible. So there is no contradiction among Gödel's <em>own</em> axioms.</p>
|
65,480 | <p>The example question is </p>
<blockquote>
<p>Find the remainder when $8x^4+3x-1$ is divided by $2x^2+1$</p>
</blockquote>
<p>The answer did something like</p>
<p>$$8x^4+3x-1=(2x^2+1)(Ax^2+Bx+C)+(Dx+E)$$</p>
<p>Where $(Ax^2+Bx+C)$ is the Quotient and $(Dx+E)$ the remainder. I believe the degree of Quotient is derived from degree of $8x^4+3x-1$ - degree of divisor. But for remainder? Would it not be </p>
| Pierre-Yves Gaillard | 660 | <p>There is a simple closed formula for the remainder $R$ and the quotient $Q$ of the euclidean division of a polynomial $P$ by a nonzero polynomial $D$. Here $P,D,Q,R$ are in $\mathbb C[X]$. </p>
<p>For any complex number $a$, any nonnegative integer $k$, and any rational fraction $f(X)\in\mathbb C(X)$ defined at $a$, let $$T_a^k(f(X))$$ be the degree at most $k$ Taylor approximation of $f(X)$ at $X=a$. </p>
<p>We may assume
$$
D(X)=\big(X-a_1\big)^{m_1}\cdots\big(X-a_r\big)^{m_r},
$$
where the $a_j$ are distinct and the $m_j$ positive. Then we have
$$
R(X)=\sum_{j=1}^r\ T_{a_j}^{m_j-1}\left(P(X)\ \frac{(X-a_j)^{m_j}}{D(X)}\right)
\frac{D(X)}{(X-a_j)^{m_j}}\quad.
$$
If $m_j=1$ for all $j$, we get Lagrange's Interpolation Formula
$$
R(X)=\sum_{j=1}^r\ P(a_j)\ \prod_{k\not=j}\ \frac{X-a_k}{a_j-a_k}\quad.
$$
If $\deg P < \deg D$, then $Q=0$. Otherwise, putting $q:=\deg P-\deg D$ and $f:=P/D$, we have
$$
Q(X^{-1})=T_0^q\Big(f(X^{-1})X^q\Big)X^{-q}.
$$ </p>
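<p>As an aside, the special case $m_j=1$ for all $j$ (Lagrange's Interpolation Formula) is easy to check numerically: the remainder of the euclidean division agrees with $P$ at the roots of $D$. A Python sketch with an assumed example $P=8X^4+3X-1$ and $D=(X-1)(X-2)(X-3)$:</p>

```python
from fractions import Fraction as F

def polydiv(p, d):
    # polynomial long division; coefficient lists, highest degree first
    p = [F(c) for c in p]
    q = []
    while len(p) >= len(d):
        coef = p[0] / F(d[0])
        q.append(coef)
        p = [pc - coef * dc
             for pc, dc in zip(p, d + [0] * (len(p) - len(d)))][1:]
    return q, p            # (quotient, remainder)

def polyval(p, x):
    # Horner evaluation
    r = F(0)
    for c in p:
        r = r * x + F(c)
    return r

P = [8, 0, 0, 3, -1]       # 8x^4 + 3x - 1
D = [1, -6, 11, -6]        # (x-1)(x-2)(x-3)
Q, R = polydiv(P, D)
```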
|
3,460,426 | <p>I tried to take the <span class="math-container">$Log$</span> of <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} = \prod _{m\ge 1} Z_m$</span>, which gives </p>
<p><span class="math-container">$$Log \prod_{m\ge 1} Z_m = \sum_{m \ge 1} Log (Z_m) = \sum_{m \ge 1} \ln |Z_m| + i \sum_{m \ge 1} \Theta_m,$$</span></p>
<p>where <span class="math-container">$\Theta_m$</span> is the principal argument of <span class="math-container">$Z_m$</span>.</p>
<p><span class="math-container">$|Z_m| = \left[\frac{1}{2} \left(1 + \cos\frac{2\pi}{3^m}\right)\right]^{1/2}$</span>
has the range <span class="math-container">$[0,1]$</span>, so <span class="math-container">$\ln |Z_m| \le 0$</span>. And, since there are infinitely many <span class="math-container">$m$</span>'s such that <span class="math-container">$\ln|Z_m| \not = 0$</span>, <span class="math-container">$\sum_{m \ge 1} \ln |Z_m| \to -\infty$</span>.</p>
<p>Then, <span class="math-container">$$\exp\left({Log \prod_{m\ge 1} Z_m}\right) = \exp\left(\sum_{m \ge 1} \ln |Z_m| \right)\exp\left(i \sum_{m \ge 1} \Theta_m\right) = 0.$$</span></p>
<p>I want to show that <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} $</span> is non-zero. What is wrong in the above reasonings?</p>
| Community | -1 | <p>Let <span class="math-container">$p_k=P(X=k)$</span>. If all children have equal probability of being chosen, then the probability of being in a family with <span class="math-container">$k$</span> children is <span class="math-container">$$\frac{kp_k}{\sum ip_i}= \frac{kp_k}{1.8}$$</span> and so the expected number of siblings is<span class="math-container">$$\frac{\sum (k-1)kp_k}{1.8}=\frac{0.36+1.8^2-1.8}{1.8}=1.$$</span></p>
|
1,456,444 | <p>How can I go about solving this Pigeonhole Principle problem? </p>
<p>So I think the possible numbers would be: $[3+12], [4+11], [5+10], [6+9], [7+8]$</p>
<p>I am trying to put this in words...</p>
| Zach466920 | 219,489 | <p>Denote the set $[3,4,5,6,7,8,9,10,11,12]$ by $S$. This set has $10$ elements. </p>
<p>As you correctly noted, this set can be split into another set $T$ with $5$ elements, $[(3,12);(4,11);(5,10);(6,9);(7,8)]$, such that the two components of each element sum to $15$. </p>
<p>If you pick $6$ integers from the set $S$, calling the set of picked numbers $W$, and wish that no two of them form a pair whose sum is $15$, then you must pick $6$ components from the set $T$ such that no two of them belong to the same element.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Pigeonhole_principle" rel="nofollow">Pigeon Hole Principle</a> states that if $n$ items are put into $m$ containers, with $n \gt m$, then at least one container must contain more than one item. Here, let $n$ be the number of picked integers and $m$ the number of elements of $T$:
$$6 \gt 5$$
Therefore, by the Pigeon Hole Principle, at least two of the picked integers must come from the same element of $T$; that is, $W$ contains a pair whose sum is $15$.</p>
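<p>As an illustrative aside, the claim is small enough to verify exhaustively: every choice of $6$ numbers from $S$ contains a pair summing to $15$, while $5$ numbers need not. A Python sketch:</p>

```python
from itertools import combinations

S = range(3, 13)                       # the set {3, 4, ..., 12}

def has_pair_summing_to_15(w):
    return any(a + b == 15 for a, b in combinations(w, 2))

# every 6-element subset of S contains a pair summing to 15
all_ok = all(has_pair_summing_to_15(w) for w in combinations(S, 6))
# 5 numbers are not enough: {3, 4, 5, 6, 7} avoids every such pair
counterexample = not has_pair_summing_to_15([3, 4, 5, 6, 7])
```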
|
2,855,339 | <p>What would be the complement of...</p>
<p>$\{$x:x is a natural number divisible by 3 and 5$\}$</p>
<p>I checked it's solution and it kind of stumped me...</p>
<p>$\{$x:x is a positive integer which is not divisible by 3 <em>or</em> not divisible by 5$\}$</p>
<p>Why the word <em>or</em> has been used in the solution? Why not <em>and</em>?</p>
| Christian Blatter | 1,303 | <p>Define $e(t):= t \>{\rm mod}\>1$. The orbit ${\bf x}$ of the three hands is then given by
$${\bf x}:\quad t\mapsto\bigl(e(t),e(12t),e(720t)\bigr)\ .$$
Instead we look at the orbit ${\bf x}'=(x_1,x_2)$ of the second and third hands relative to the first hand, given by
$${\bf x}': \quad t\mapsto\bigl(e(11t),e(719t)\bigr)\ .$$
In the $(x_1,x_2)$-plane we draw the fundamental domain $R:=\bigl[-{1\over2},{1\over2}\bigr]\times\bigl[-{1\over2},{1\over2}\bigr]$. It is easily seen that the part $F:$ $|x_1-x_2|\leq{1\over2}$ of $R$ (a hexagon) contains the positions where the three hands can be covered by a half-disk. This part $F$ makes up ${3\over4}$of $R$.</p>
<p>If we now draw the relative orbit ${\bf x}'$ into $R$ then we see a "lattice" of $719$ parallel lines with slope ${719\over11}$ in $R$. The wandering point ${\bf x}'(t)$ has constant speed. It is then clear that ${\bf x}'(t)$ spends $\approx{3\over4}$ of its time in $F$. In order to obtain the exact (rational) value one would have to engage in cumbersome calculations, which I omit.</p>
|
1,303,772 | <blockquote>
<p>Show that $$-2 \le \cos \theta ~ (\sin \theta +\sqrt{\sin ^2 \theta +3})\le 2$$ for all value of $\theta$.</p>
</blockquote>
<p>Trial: I know that $0\le \sin^2 \theta \le1 $. So, I have $\sqrt3 \le \sqrt{\sin ^2 \theta +3} \le 2 $. After that I am unable to solve the problem. </p>
| lab bhattacharjee | 33,337 | <p>Let $\cos \theta ~ (\sin \theta +\sqrt{\sin ^2 \theta +3})=y$</p>
<p>$\iff \sin \theta +\sqrt{\sin ^2 \theta +3}=y\sec\theta$</p>
<p>$\iff \sqrt{\sin ^2 \theta +3}=y\sec\theta-\sin \theta$</p>
<p>Squaring we get $\sin ^2 \theta +3=y^2(1+\tan^2\theta)+\sin^2\theta-2y\tan\theta$</p>
<p>$\iff y^2(\tan^2\theta)-2y(\tan\theta)+y^2-3=0$</p>
<p>As $\tan\theta$ is real, the discriminant must be $\ge0\implies(y-2)(y+2)\le0\iff\cdots$</p>
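<p>As an illustrative aside (not part of the original answer), a dense numerical sampling confirms that the extreme values $\pm2$ are attained and never exceeded. A Python sketch:</p>

```python
import math

def y(t):
    # y = cos(t) * (sin(t) + sqrt(sin^2(t) + 3))
    s = math.sin(t)
    return math.cos(t) * (s + math.sqrt(s * s + 3))

# sample one full period densely
vals = [y(2 * math.pi * k / 100_000) for k in range(100_000)]
lo, hi = min(vals), max(vals)
```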
|
1,960,911 | <p>I am trying to evaluate this limit for an assignment.
$$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$</p>
<p>I have tried to rationalize the function:
$$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$</p>
<p>$$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$</p>
<p>Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$</p>
<p>Leading to </p>
<p>$$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$</p>
<p>Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that?</p>
| marwalix | 441 | <p>It leads to</p>
<p>$$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{1-(\frac{6}{x})+(\frac{1}{x^2})}+1}$$</p>
<p>And so the limit is $-3$</p>
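<p>As an aside, evaluating the original expression at large $x$ illustrates the convergence to $-3$; a Python sketch:</p>

```python
import math

def g(x):
    # the original expression sqrt(x^2 - 6x + 1) - x
    return math.sqrt(x * x - 6 * x + 1) - x

# evaluate at x = 10^3, 10^4, ..., 10^7
values = [g(10.0 ** k) for k in range(3, 8)]
```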
|
2,361,336 | <p>Note: This is <strong>not a duplicate</strong> as I am asking for a proof, not a criteria, and this is a specific proof, not just any proof – <strong>please treat like any other question on a specific math problem.</strong> Please do not close. thanks!</p>
<p><a href="https://i.stack.imgur.com/5R2aE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5R2aE.png" alt="enter image description here"></a></p>
<p>I am having trouble proving the above as I don't know how to express the various cases/outcomes of <em>N</em> when 11 is added to <em>M</em>. Take for example N=<strong>9759</strong>, then M=9-7+5-9=0</p>
<p>However, M+11 could give many different numbers, depending on where and what integers are added.</p>
<p>So for M to become 0+11=11 in the above example,</p>
<p>(i) N=<strong>9757</strong>946 is one possibility</p>
<p>(ii) N=946<strong>9757</strong> is another possibility</p>
<p>Although they are essentially the same, mathematically they are different (I think?) because:</p>
<p>(i) M=<strong>9-7+5-7</strong>+9-4+6</p>
<p>(ii) M=9-4+6 <strong>-9+7-5+7</strong></p>
<p>so the (-1)^n coefficient changes for the digits 9, 7, 5, 7</p>
<hr>
<p>These proofs for divisibility by 3 may help:</p>
<p><a href="https://i.stack.imgur.com/yw1xT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yw1xT.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/bo0jy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bo0jy.png" alt="enter image description here"></a></p>
| Vassilis Markos | 460,287 | <p>An easy way to prove that $Q(x)=0$ is by induction on $n$:</p>
<ol>
<li>For $n=1$ we have that $Q(x)=a_0+a_1(x-x_0)$ and the wanted limit is:
$$\lim_{x\to x_0}\frac{a_0+a_1(x-x_0)}{x-x_0}=0$$
Let $$g(x)=\frac{Q(x)}{x-x_0}\Rightarrow Q(x)=g(x)(x-x_0)$$
Now, since $\lim\limits_{x\to x_0}g(x)=0$ and $Q$ is continuous as a polynomial, we have:
$$Q(x_0)=\lim_{x\to x_0}Q(x)=\lim_{x\to x_0}g(x)(x-x_0)=0\cdot0=0$$
So, $$Q(x_0)=0\Rightarrow a_0+a_1(x_0-x_0)=0\Rightarrow a_0=0$$
and our limit now is:
$$0=\lim_{x\to x_0}\frac{a_1(x-x_0)}{x-x_0}=a_1\lim_{x\to x_0}\frac{x-x_0}{x-x_0}=a_1\cdot1=a_1\Rightarrow a_1=0$$
So, $$Q(x)\equiv0$$</li>
<li>Now, let us suppose that the requested is true for $n$, so that
$$\lim_{x\to x_0}\frac{P(x)}{(x-x_0)^n}=0$$
for a polynomial $P$ of degree $\leq n$ implies that $P(x)\equiv0$. Let $$Q(x)=a_0+a_1(x-x_0)+\dots+a_{n+1}(x-x_0)^{n+1}$$
and let
$$\lim_{x\to x_0}\frac{Q(x)}{(x-x_0)^{n+1}}=0$$
We will at first show that $a_0=0$. Let $$g(x)=\frac{Q(x)}{(x-x_0)^{n+1}}\Rightarrow Q(x)=g(x)(x-x_0)^{n+1}$$
So, since $\lim\limits_{x\to x_0}g(x)=0$ and $Q$ is continuous, we have:
$$Q(x_0)=\lim_{x\to x_0}Q(x)=\lim_{x\to x_0}g(x)(x-x_0)^{n+1}=0$$
So, $Q(x_0)=0\Rightarrow a_0=0$
Then $$\begin{align*}Q(x)=&a_1(x-x_0)+\dots+a_{n+1}(x-x_0)^{n+1}=\\=&(x-x_0)\underbrace{\left(a_1+a_2(x-x_0)+\dots+a_{n+1}(x-x_0)^n\right)}_{P(x)}=\\
=&(x-x_0)P(x)
\end{align*}$$
where $P(x)$ is a polynomial of degree $\leq n$. Now, our limit is:
$$\lim_{x\to x_0}\frac{Q(x)}{(x-x_0)^{n+1}}=\lim_{x\to x_0}\frac{(x-x_0)P(x)}{(x-x_0)^{n+1}}=\lim_{x\to x_0}\frac{P(x)}{(x-x_0)^n}=0$$
but, due to our hypothesis, since $P$ is of degree $\leq n$, we have that:
$$P(x)\equiv0$$
So:
$$a_1=a_2=\dots=a_{n+1}=0$$
And the proof is now complete.</li>
</ol>
|
2,500,961 | <p>I've been able to find formulas all over the place for the sum and product of roots, but I haven't found anything that explains the significance of what they mean or how to interpret them to further gain understanding of the polynomial under evaluation. Is there any physical meaning? Do the values have any significance?</p>
<p>For example, I have a $4$<sup>th</sup> order complex polynomial in $ \mathbb{Z} $, for which I find the real part of the $4$ roots add up to $\frac{\pi}{2}$. I'm wondering what the significance of the sum being $\frac{\pi}{2}$ is? To me it's a "buzz" number.</p>
| Math Model | 440,850 | <p><a href="https://www.mathsisfun.com/algebra/polynomials-sums-products-roots.html" rel="nofollow noreferrer">https://www.mathsisfun.com/algebra/polynomials-sums-products-roots.html</a></p>
<p>I had to google it, but I would check out that link. It relates the roots to the coefficients of the polynomial, which I think makes sense, because you build a polynomial by multiplying linear factors together (assuming the polynomial splits into linear factors over your chosen field; over the complex numbers you don't have to worry about that). </p>
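<p>To make the root–coefficient link concrete, here is a small Python sketch (an illustration with an assumed example, not from the linked page) checking Vieta's relations: for $a_nx^n+\dots+a_0$ with roots $r_1,\dots,r_n$, the sum of the roots is $-a_{n-1}/a_n$ and their product is $(-1)^n a_0/a_n$.</p>

```python
def poly_from_roots(roots):
    # expand prod (x - r); coefficients listed from highest degree down
    c = [1]
    for r in roots:
        c = [1] + [c[i] - r * c[i - 1] for i in range(1, len(c))] + [-r * c[-1]]
    return c

roots = [1, 2, 3, -4]                  # arbitrary example roots
c = poly_from_roots(roots)             # x^4 - 2x^3 - 13x^2 + 38x - 24
n = len(roots)

sum_of_roots = sum(roots)
prod_of_roots = 1
for r in roots:
    prod_of_roots *= r
```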
|
1,966,122 | <p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \sum_{k=n+1}^{2n} \frac{1}{k}$$</p>
<p>I am trying to prove this inductively, so I thought that I would expand the right side out of sigma form to get</p>
<p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \frac{2}{2n(2n+1)} - \frac{1}{n}$$</p>
<p>which simplified to</p>
<p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \frac{-2}{2n+1}$$</p>
<p>but apparently that isn't correct, can someone provide some insight into what I am doing wrong?</p>
| Kat | 297,952 | <p>Note that $$\begin{pmatrix}
1 & 2 & -2\\
2 & 1 & 1
\end{pmatrix}$$ is the matrix $|f|_{BE}$, where $B$ is the given basis and $E$ is the standard basis for $\mathbb R^2$. Now recall that for two given bases, we have the respective change of basis matrices. With this in mind, what you can use to get $|f|_{BD}$ and $|f|_{CD}$ is simply matrix multiplication, like so:</p>
<p>$$|f|_{BD}=C_{ED}|f|_{BE}C_{DE}=C_{DE}^{-1}|f|_{BE}C_{DE}$$
where $C_{ED}$ and $C_{DE}$ are the change of basis matrices for E to D and D to E respectively. The same idea can be used to calculate $|f|_{CD}$</p>
<p>I hope this helps you!</p>
|
3,888,259 | <p>The special linear group of invertible matrices is defined as the kernel of the determinant map:</p>
<p><span class="math-container">$$\det:GL(n,\mathbb{R}) \to \mathbb{R}^*$$</span></p>
<p>In my mind the kernel of a linear map is the set of vectors that are mapped to the zero vector. So the kernel of the map above would contain all the matrices that have determinant zero (which doesn't make sense since the codomain of the function excludes zero)? But isn't the special linear group made of matrices with determinant 1?</p>
| Mummy the turkey | 801,393 | <p><span class="math-container">$\mathbb{R}^∗$</span> is the multiplicative group, so the identity is <span class="math-container">$1$</span>; the kernel consists of the elements that the group homomorphism sends to it.</p>
<p>NB: the determinant map is not linear and indeed the group operation on <span class="math-container">$GL_2(\mathbb{R})$</span> is not abelian so it's clearly not a vector space in a naive way.</p>
|
253,966 | <p>Just took my final exam and I wanted to see if I answered this correctly:</p>
<p>If $A$ is a Abelian group generated by $\left\{x,y,z\right\}$ and $\left\{x,y,z\right\}$
have the following relations:</p>
<p>$7x +5y +2z=0; \;\;\;\; 3x +3y =0; \;\;\;\; 13x +11y +2z=0$</p>
<p>does it follow that $A \cong Z_{3} \times Z_{3} \times Z_{6}$ ?</p>
<p>I know if we set $x=(1,0,2)$, $y=(0,1,0)$ and $z=(2,1,5)$ then this is consistent with the relations and with $A \cong Z_{3} \times Z_{3} \times Z_{6}$ </p>
| Hagen von Eitzen | 39,174 | <p>The trivial counterexample is that the trivial group is generated by $x=0, y=0, z=0$ and of course the given relations hold with $x=y=z=0$.
(Note that no one said that the set $\{x,y,z\}$ has cardinality $3$.)</p>
<p>If you should insist on $x,y,z$ being distinct, observe that <em>any</em> quotient of $Z_3\times Z_3\times Z_6$ will preserve the relations. For example the projection to the first two factors $Z_3\times Z_3$ maps to the three distinct elements $(1,0), (0,1), (2,2)$.</p>
|
402,802 | <p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p>
<p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p>
<p>Please tell what is correct.</p>
| Mark McClure | 21,361 | <p>The results coming from <em>any</em> software should be checked and considered from multiple angles. Part of the reason the graph is provided is precisely to help you do that. In this case, the graph makes it crystal clear that the result is twice the smallest period.</p>
<p><img src="https://i.stack.imgur.com/KiP15.png" alt="enter image description here"></p>
<p>This is a bug and you can feel free to report it as such using the "Give us your feedback" window at the bottom of the page.</p>
<p>Ultimately, the problem is the following Mathematica computation:</p>
<pre><code>Periodic`PeriodicFunctionPeriod[Abs[Sin[x]] + Abs[Cos[x]], x]
(* Out: Pi *)
</code></pre>
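<p>As an aside, the claimed fundamental period is also easy to confirm numerically: $\pi/2$ is a period of $|\sin x|+|\cos x|$ (indeed $\sin(x+\pi/2)=\cos x$ and $\cos(x+\pi/2)=-\sin x$), while a smaller candidate like $\pi/4$ fails. An illustrative Python sketch:</p>

```python
import math

def f(x):
    return abs(math.sin(x)) + abs(math.cos(x))

xs = [k * 0.001 for k in range(10_000)]
# pi/2 shifts leave f unchanged everywhere sampled
is_period_half_pi = all(abs(f(x + math.pi / 2) - f(x)) < 1e-12 for x in xs)
# pi/4 is NOT a period (e.g. f(0) = 1 but f(pi/4) = sqrt(2))
is_period_quarter_pi = all(abs(f(x + math.pi / 4) - f(x)) < 1e-12 for x in xs)
```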
|
481,834 | <p>Let $A=(A_{ij})$ be a square matrix of order $n$. Verify that the determinant of the matrix</p>
<p>$\left( \begin{array}{cccc}
a_{11}+x & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22}+x & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}+x \end{array} \right)$,</p>
<p>can be represented as the polynomial $x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_{n-1}x+a_n$, where each coefficient $a_k$ is the sum of the minors of order $k$ of the matrix $A$.</p>
<p>I tried to use the definition of determinant by cofactor expansion but it's very long, I was wondering if there's a shorter way to show this. </p>
| Marc van Leeuwen | 18,880 | <p>Write each column$~j$ of $A+xI_n$ as a sum of the column$~j$ of$~A$ and $x$ times column $j$ of$~I_n$. Now apply multi-linearity of the determinant with respect to the columns for each of the columns, to obtain a sum of $2^n$ determinants (each column was a sum of $2$ terms, and doubled the number of terms obtained for the whole determinant). The sum can be indexed by the $2^n$ subsets of $\{1,2,\ldots,\}$, namely the subset of the columns for which the term involving $x$ was chosen. One obtains
$$
\det(A+xI_n) = \sum_{S\subseteq\{1,2,\ldots,n\}}x^{|S|}\det M(S,A)
$$
where $M(S,A)$ denotes the matrix obtained from$~A$ by replacing the columns whose index appears in$~S$ by the corresponding column of$~I_n$.</p>
<p>Now fixing $S$, the determinant of $M(S,A)$ can be successively developed by the columns selected by$~S$, those that are taken from$~I_n$, which development involves only a single nonzero term each time. What remains is the determinant of the matrix obtained from $A$ by removing the rows and the columns whose index lies in$~S$. This is a <a href="https://en.wikipedia.org/wiki/Minor_%28linear_algebra%29#Other_applications" rel="nofollow"><em>principal</em> minor</a> of order $n-|S|$. Thus after collecting the terms with the same power of$~x$, the coefficient$~a_k$ of $x^{n-k}$ is the sum of all principal minors of order$~k$ of$~A$. With that correction, the statement given has been proved.</p>
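<p>As an illustrative aside, the statement (with <em>principal</em> minors) can be checked computationally for a hypothetical $3\times3$ integer matrix: since both sides are degree-$3$ polynomials in $x$, agreement at $7$ sample points forces identity. A Python sketch:</p>

```python
from itertools import combinations

A = [[2, 1, 0],
     [3, 5, 1],
     [1, 2, 4]]                        # an arbitrary example matrix

def det(m):
    # recursive Laplace expansion along the first row
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def principal_minor_sum(m, k):
    # sum of all k x k minors taken on the same row and column indices
    n = len(m)
    return sum(det([[m[i][j] for j in idx] for i in idx])
               for idx in combinations(range(n), k))

s = [principal_minor_sum(A, k) for k in (1, 2, 3)]

def p(x):                              # x^3 + s1 x^2 + s2 x + s3
    return x ** 3 + s[0] * x ** 2 + s[1] * x + s[2]

ok = all(det([[A[i][j] + (x if i == j else 0) for j in range(3)]
              for i in range(3)]) == p(x)
         for x in range(-3, 4))
```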
|
397,347 | <p>I'm trying to figure out how to evaluate the following:
$$
J=\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx
$$
I tried considering $I(s) = \int_{0}^{\infty}\frac{x^3}{(e^x-1)^s}\,dx\implies J=-I'(1)$, but I couldn't figure out what $I(s)$ was. My other idea was contour integration, but I'm not sure how to deal with the logarithm. Mathematica says that $J\approx24.307$. </p>
<p>I've asked a <a href="https://math.stackexchange.com/questions/339711/find-the-value-of-j-int-0-infty-fracx3ex-1-lnx-dx">similar question</a> and the answer involved $\zeta(s)$ so I suspect that this one will as well. </p>
| Mhenni Benghorbal | 35,472 | <p>Using the change of variables $ u=e^{-x} $, we have </p>
<blockquote>
<p>$$\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx = \int _{0}^{1}\!{\frac { \left( \ln \left( u \right) \right) ^{3}
\ln \left( 1-u \right) }{u-1
}}{du}- \int _{0}^{1}\!{\frac { \left( \ln \left( u \right) \right)^{4} }{u-1
}}{du}. $$</p>
</blockquote>
<p>Now, just apply the technique which has been used to find the exact solution in this <a href="https://math.stackexchange.com/questions/290250/a-nice-log-trig-integral/291975#291975">problem</a> and the result will follow.</p>
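<p>As a numerical cross-check of the value quoted in the question ($J\approx24.307$), here is an illustrative Python sketch using Simpson's rule; the cutoffs $10^{-9}$ and $60$ are ad-hoc choices (the integrand behaves like $x^2\ln x$ near $0$ and decays like $x^4e^{-x}$ at infinity, so the truncation error is negligible):</p>

```python
import math

def integrand(x):
    em1 = math.expm1(x)                # e^x - 1, accurate for small x
    return x ** 3 / em1 * math.log(em1)

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

J = simpson(integrand, 1e-9, 60.0, 200_000)
```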
|
76,778 | <p>I'm reading Yao's unpredictability -> pseudorandomness construction
and Goldreich/Levin's pseudorandom permutation -> pseudorandom generator construction.</p>
<p>My question is:</p>
<p>is there a direct way to show that:</p>
<p>given a pseudorandom function, we can construct a pseudorandom permutation out of it?</p>
<p>[or is this question open]</p>
<p>Thanks!</p>
| Igor Rivin | 11,142 | <p>To expand very slightly upon @Steve's words of wisdom, see
<a href="http://en.wikipedia.org/wiki/Feistel_cipher" rel="nofollow">http://en.wikipedia.org/wiki/Feistel_cipher</a></p>
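<p>To make the pointer concrete: the Luby–Rackoff result shows that a few Feistel rounds keyed by a pseudorandom function give a pseudorandom permutation (three rounds for a PRP, four for a strong PRP). Below is an illustrative Python sketch; HMAC-SHA256 stands in for the PRF, and the keys and block sizes are arbitrary toy choices, not a secure implementation:</p>

```python
import hashlib
import hmac

def prf(key: bytes, msg: bytes, out_len: int) -> bytes:
    # stand-in PRF: HMAC-SHA256 truncated to out_len bytes
    return hmac.new(key, msg, hashlib.sha256).digest()[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(keys, block: bytes) -> bytes:
    # one PRF call per round: (L, R) -> (R, L xor F(R))
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in keys:
        L, R = R, xor(L, prf(k, R, half))
    return L + R

def feistel_decrypt(keys, block: bytes) -> bytes:
    # run the rounds backwards; each Feistel round is invertible
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in reversed(keys):
        L, R = xor(R, prf(k, L, half)), L
    return L + R

keys = [b"k1", b"k2", b"k3"]           # arbitrary toy round keys
ct = feistel_encrypt(keys, b"abcdefgh")
pt = feistel_decrypt(keys, ct)
```

<p>The Feistel structure makes the map a permutation regardless of the round function, which is exactly why a PRF suffices as the building block.</p>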
|
1,984,843 | <p>if $\cup$ is finite, say $n$, I came up with formula</p>
<p>$f(x) = n x + i$, where $x \in [\frac{i}{n}, \frac{i+1}{n}]$, $n$ is a non-negative integer, and $i$ ranges from $0$ to $n-1$.<br><br></p>
<p>I'm not sure whether it's correct to assume the bijection holds if $n$ approaches infinity.</p>
| mfl | 148,513 | <p><strong>Hint</strong></p>
<p>First of all define $f(1/n)=n-1,\forall n\in\mathbb{N}.$ Thus we have covered the extremes of the union of intervals.</p>
<p>Now, define $f$ on $(1/(n+1),1/n)$ to be a bijection between $(1/(n+1),1/n)$ and $(2n-2,2n-1).$</p>
|
2,916,246 | <p>I've found this to be difficult to solve:</p>
<p>$$ \frac{d^2 x }{dt^2} + (a x + b) \frac{dx}{dt} = 0 $$</p>
<p>I've done some reading, and I guess I could write this as:</p>
<p>$$ \frac{d^2 x }{dt^2} + b \frac{dx}{dt} + ax \frac{dx}{dt} = 0 $$</p>
<p>If I then treat $v(x) = \frac{dx}{dt}$ as an independent variable, I would get:</p>
<p>$$ \frac{dv}{dt} + bv + axv = 0 $$</p>
<p>This is sort of like a nonhomogenous equation. If I take the homogenous solution, I would get:</p>
<p>$$ v(t) = A e^{-bt}$$</p>
<p>I think.... I'm not sure where to go from here though. </p>
| Chinny84 | 92,628 | <p>We can see that the solution you have is wrong by putting it back in:
$$
v' = -bv
$$
so we have in your original equation
$$
-bv + bv + axv = axv = 0
$$
this is in general not true.</p>
<p>Your issue was not converting the equation into one for $v$ as a function of $x$. </p>
<p>To give a hint.
$$
x'' = \frac{dx}{dt}\frac{dv}{dx}
$$
this leads to
$$
\frac{dx}{dt}\frac{dv}{dx} + b\frac{dx}{dt} + ax\frac{dx}{dt} = \left(\frac{dv}{dx} + b + ax\right)\frac{dx}{dt} = 0
$$
In general $x' \neq 0$, so we can try to solve
$$
\frac{dv}{dx} + b + ax = 0
$$
This is a first order ode with the correct variables to solve nicely.</p>
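<p>Integrating $\frac{dv}{dx}+b+ax=0$ gives the first integral $v+bx+\frac a2x^2=\text{const}$. As an illustrative aside (arbitrary parameter values), one can check numerically that an RK4 simulation of the original ODE preserves this quantity; a Python sketch:</p>

```python
def simulate(a, b, x0, v0, dt=1e-3, steps=5_000):
    # classical RK4 on the first-order system (x, v)' = (v, -(a*x + b)*v)
    def deriv(x, v):
        return v, -(a * x + b) * v
    x, v = x0, v0
    for _ in range(steps):
        k1 = deriv(x, v)
        k2 = deriv(x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = deriv(x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = deriv(x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, v

def invariant(a, b, x, v):
    # v + b*x + (a/2)*x^2 should stay constant along solutions
    return v + b * x + a * x * x / 2

a, b = 1.0, 0.5                        # arbitrary illustrative parameters
x0, v0 = 0.2, 1.0
c0 = invariant(a, b, x0, v0)
x1, v1 = simulate(a, b, x0, v0)
drift = abs(invariant(a, b, x1, v1) - c0)
```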
|
2,916,246 | <p>I've found this to be difficult to solve:</p>
<p>$$ \frac{d^2 x }{dt^2} + (a x + b) \frac{dx}{dt} = 0 $$</p>
<p>I've done some reading, and I guess I could write this as:</p>
<p>$$ \frac{d^2 x }{dt^2} + b \frac{dx}{dt} + ax \frac{dx}{dt} = 0 $$</p>
<p>If I then treat $v(x) = \frac{dx}{dt}$ as an independent variable, I would get:</p>
<p>$$ \frac{dv}{dt} + bv + axv = 0 $$</p>
<p>This is sort of like a nonhomogenous equation. If I take the homogenous solution, I would get:</p>
<p>$$ v(t) = A e^{-bt}$$</p>
<p>I think.... I'm not sure where to go from here though. </p>
| Donald Splutterwit | 404,247 | <p>The substitution $p=\frac{dx}{dt}$, so $\frac{d^2x}{dt^2}=p \frac{dp}{dx}$ and this gives
\begin{eqnarray*}
dp = -(ax+b) dx \\
\frac{dx}{dt} = -ax^2/2-bx+c.
\end{eqnarray*}
To integrate further depends upon the discriminant of the quadratic in $x$.</p>
|
3,637,283 | <p>How would I find the fourth roots of <span class="math-container">$-81i$</span> in the complex numbers? </p>
<p>Here is what I currently have: </p>
<p><span class="math-container">$w = -81i$</span> </p>
<p><span class="math-container">$r = 9$</span> </p>
<p><span class="math-container">$\theta = \arctan (-81)$</span>? </p>
<p>Although I am not sure it's correct or if I am on the right track. May I have some help please? </p>
| bjcolby15 | 122,251 | <p>Hints:</p>
<p>1) Rewrite <span class="math-container">$-81i$</span> as <span class="math-container">$0-81i$</span>.</p>
<p>2) Find the modulus <span class="math-container">$|z|$</span> and argument of <span class="math-container">$0-81i$</span>, using the formulas <span class="math-container">$$|z| = \sqrt {a^2 + b^2}$$</span> and <span class="math-container">$$\arg \theta = \arctan \dfrac {b}{a}$$</span> </p>
<p>3) Use De Moivre's formula <span class="math-container">$z^{1/n} = |z|^{1/n} \left(\cos \dfrac {\theta + 2\pi k}{n} + i \sin \dfrac {\theta + 2\pi k}{n}\right)$</span>, <span class="math-container">$k \in \{0,1,2,3\}$</span> with the values you've found.</p>
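<p>As an illustrative aside, the three steps can be carried out directly with Python's complex-number support: the modulus of $-81i$ is $81$ (so each fourth root has modulus $3$), the argument is $-\pi/2$, and each of the four roots indeed satisfies $z^4=-81i$:</p>

```python
import cmath
import math

w = complex(0, -81)                    # the number -81i
r = abs(w)                             # modulus: 81
theta = cmath.phase(w)                 # principal argument: -pi/2

# De Moivre: the four fourth roots of w
roots = [r ** 0.25 * cmath.exp(1j * (theta + 2 * math.pi * k) / 4)
         for k in range(4)]
```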
|
2,638,028 | <p><strong>Question:</strong></p>
<blockquote>
<p>If $p,q$ are positive integers, $f$ is a function defined for positive numbers and attains only positive values such that $f(xf(y))=x^py^q$, then prove that $p^2=q$.</p>
</blockquote>
<p><strong>My solution:</strong></p>
<p>Put $x=1$. So, $f(f(y))=y^q$, then evidently, $f(y)=y^{\sqrt{q}}...(1)$ satisfies this.<br>
Now, put $x=y=1$, to get $f(1)=1$<br>
Now, put $y=1$. So, $f(x)=x^p...(2)$</p>
<p>Equalising $f(a)$ for an arbitrary constant $a$ from the two equations $(1)$ and $(2)$, we get: $a^{\sqrt{q}}=a^p$ or $p^2=q$. $\blacksquare$</p>
<hr>
<p>Is this solution correct? I am particularly worried because I have solved this six marks question in a four line solution which wouldn't make my prof very happy...</p>
| Community | -1 | <p>Your solution is not conclusive, since $f(f(y))=y^q$ has infinitely many solutions. But if you want to make that professor unhappy with a short solution, you can do that: replacing $x$ by $f(x)$ in your equation, you get
$$f(f(x)f(y))=f(x)^py^q.$$ The LHS is symmetric in $x,y$, so you must have also $$f(x)^py^q=f(y)^px^q,$$ meaning<br>
$$f(x)^px^{-q}=f(y)^py^{-q}.$$ So that must be a constant, and you arrive at
$$f(x)=c\,x^{q/p}.$$ If you plug this into your original equation, you obtain the two conditions $c^{1+q/p}=1$ and $q/p=p$, i.e. $p^2=q$ and $c=1$. </p>
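The conclusion can be spot-checked numerically (a hypothetical sketch, not a proof): take $p=2$, $q=p^2=4$, $c=1$, so that $f(x)=x^2$.

```python
p, q = 2, 4                 # q = p**2
f = lambda x: x ** (q / p)  # f(x) = x**2, the c = 1 solution

# Verify f(x*f(y)) == x**p * y**q on a few positive sample points.
for x, y in [(0.5, 3.0), (2.0, 1.7), (1.3, 0.2)]:
    assert abs(f(x * f(y)) - x**p * y**q) < 1e-9
print("ok")
```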
|
1,507,710 | <p>I'm trying to get my head around group theory as I've never studied it before.
As far as the general linear group goes, I think I've ascertained that it's a group of matrices and so the 4 axioms hold?
The question I'm trying to figure out is why $(GL_n(\mathbb{Z}),\cdot)$ does not form a group.
I think I read somewhere that it's because it doesn't have an inverse and I understand why this would not be a group, but I don't understand why it wouldn't have an inverse. </p>
| Bernard | 202,857 | <p><span class="math-container">$\DeclareMathOperator{\GL}{GL}\GL_n(\mathbf Z)$</span> <em>is</em> a multiplicative group, by definition: it is the set of <em>invertible</em> matrices with coefficients in <span class="math-container">$\mathbf Z$</span>. </p>
<p>The problem is that it's not what you seem to think – the set of matrices with a non-zero determinant. In general, for any (commutative) ring <span class="math-container">$A$</span>, <span class="math-container">$\GL_n(A)$</span> is the set
/group of invertible matrices, i.e. the matrices with determinant <em>invertible</em> in <span class="math-container">$A$</span>.</p>
<p>In the case <span class="math-container">$A=\mathbf Z$</span>, this means the matrix has determinant <span class="math-container">$\pm1$</span>.</p>
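A small illustration of the point (hypothetical sketch): a $2\times2$ integer matrix has an inverse with integer entries exactly when its determinant is $\pm1$.

```python
from fractions import Fraction

def inv2(m):
    """Exact inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

unit = [[2, 1], [1, 1]]      # det = 1: a unit in Z, so the inverse is integral
non_unit = [[2, 0], [0, 1]]  # det = 2: invertible over Q, but not over Z

print(all(f.denominator == 1 for row in inv2(unit) for f in row))      # True
print(all(f.denominator == 1 for row in inv2(non_unit) for f in row))  # False
```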
|
1,982,102 | <p>If I wanted to figure out for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract. Or is the rule I always have to add $1$ when I subtract?</p>
| CiaPan | 152,299 | <p>If you started at question 15 and finished at question 15, how many questions have you answered?</p>
<p>Imagine a list of exercises to be done in order. Some of them are marked as done already. You start from the first unmarked question.</p>
<blockquote>
<p>Every time you complete an exercise you mark it, thus increasing the number of the first exercise waiting to be done.</p>
</blockquote>
<p>This way the <em>number of exercises done</em> in some time is an <em>increment of the number of the first exercise not done yet</em>.<br>
Since you started with question 45 (so 45 was the first question NOT done then) and you stopped after question 55 (so the first question NOT done yet is $56 = 55+1$ now), you have answered $(55+1)-45=11$ questions.</p>
<p>Put it another way:</p>
<p>when you started your work, the last question answered was number $44 = 45-1$, and now it is $55$; the number of questions you answered today is an increment of the number of questions answered: $55 - (45-1) = 11$.</p>
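The inclusive count is easy to see in a few lines of Python (hypothetical illustration):

```python
first, last = 45, 55
done = list(range(first, last + 1))  # questions 45, 46, ..., 55

print(last - first)  # 10 -- plain subtraction counts the steps, not the items
print(len(done))     # 11 -- the item count is last - first + 1
```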
|
1,439,004 | <p>I am trying to come up with a counting argument for: $\sum_{k=1}^{n}q^{k-1} = \frac{q^n-1}{q-1}$. I am trying to base it on counting the left side as the number of words of length $k-1$ from an alphabet of size $q$, summed over $k=1$ to $k=n$, but I can't seem to come up with a fitting argument to count the right side of the equation.</p>
| Brian M. Scott | 12,042 | <p>HINT: Fix a letter $a$ of your alphabet; there are $q^n-1$ words of length $n$ that contain at least one letter different from $a$.</p>
<p>Now count the same set of words according to the position of the last non-$a$ letter. If this is position $k$, $k$ can have any value from $1$ through $n$; how many words of length $n$ have the last non-$a$ letter in position $k$?</p>
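The hint can be verified by brute force for small parameters (a hypothetical sketch, with letter $0$ playing the role of $a$):

```python
from itertools import product

q, n = 3, 4
words = list(product(range(q), repeat=n))  # all q**n words of length n

def last_non_a(w):
    """1-based position of the last letter != 0, or 0 for the all-zero word."""
    for i in range(n - 1, -1, -1):
        if w[i] != 0:
            return i + 1
    return 0

counts = {k: sum(1 for w in words if last_non_a(w) == k) for k in range(1, n + 1)}

# Position k contributes (q-1)*q**(k-1) words, and together they cover the
# q**n - 1 words that contain at least one letter different from 0.
assert all(counts[k] == (q - 1) * q**(k - 1) for k in counts)
assert sum(counts.values()) == q**n - 1
print(counts)
```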
|
73,991 | <p>I have the axiom from Peano's axioms:</p>
<p>If $A\subseteq \mathbb{N}$ and $1\in A$ and $m\in A \Rightarrow S(m)\in A$, then $A=\mathbb{N}$.</p>
<p>My book tells me that it secures that there are no more natural numbers than the numbers produced by the below 3 axioms (also from Peano's axioms):</p>
<p>$1\in \mathbb{N}$</p>
<p>For every $n\in\mathbb{N}: 1\neq S(n)$</p>
<p>For every $m,n\in \mathbb{N}:m\neq n\Rightarrow S(m) \neq S(n)$</p>
<p>And I'm not sure why? Is there someone who can explain this?</p>
<p>$S(n)$ is a unary function $S: \mathbb N \rightarrow \mathbb N$. Does this mean that $S(n)=n+1$?</p>
| hmakholm left over Monica | 14,366 | <p>Yes, $S(n)$ is intended to represent $n+1$. Later, when addition is defined, "$n+1$" will turn out to <em>mean</em> $S(n)$.</p>
<p>As for the induction: Let
$$A=\{1,S(1),S(S(1)), S(S(S(1))),\ldots\}$$
This is clearly a subset of $\mathbb{N}$ as defined by the axioms; it satisfies $1\in A$ and for every $n\in A$ it must also be that $S(n)\in A$. Therefore, by the induction axiom, $A=\mathbb N$, so
$$\mathbb N=\{1,S(1),S(S(1)), S(S(S(1))),\ldots\}$$</p>
<p>Strictly speaking the above argument is not quite formal, because formulas that involve "$\ldots$" are usually taken to be abbreviations for more involved constructions that involve the natural numbers. However, as our main aim here is to define the natural numbers formally in the first place, it is not clear that the above argument has any formal content at all. (It does, however: It says that whenever we have a <em>model</em> of the (second-order) Peano axioms, the model has to be isomorphic to the natural numbers we use at the metalevel to give meaning to the "$\ldots$").</p>
|
4,251,233 | <p>Find</p>
<p><span class="math-container">$\int\frac{x+1}{x^2+x+1}dx$</span></p>
<p><span class="math-container">$\int \frac{x+1dx}{x^2+x+1}=\int \frac{x+1}{(x+\frac{1}{2})^2+\frac{3}{4}}dx$</span></p>
<p>From here I don't know what to do. Write <span class="math-container">$(x+1)$</span> = <span class="math-container">$t$</span>?</p>
<p>This does not work. Use integration by parts? I don't think it will work here.</p>
<p>And I should complete the square, then go from there.</p>
| Adam Rubinson | 29,156 | <p>Write <span class="math-container">$x+\frac{1}{2} = t.$</span> Then,</p>
<p><span class="math-container">$$\frac{x+1}{\left(x+\frac{1}{2}\right)^2+\frac{3}{4}} = \frac{x+\frac12 + \frac12}{\left(x+\frac{1}{2}\right)^2+\frac{3}{4}} = \frac{t}{t^2+\frac{3}{4}} + \frac{\frac12}{t^2+\frac{3}{4}}.$$</span></p>
<p>Can you integrate these two terms now?</p>
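The split can be confirmed symbolically (hypothetical sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (x + 1) / (x**2 + x + 1)
F = sp.integrate(integrand, x)
print(F)  # a log term (from t/(t**2+3/4)) plus an arctan term (from the rest)

# Differentiating back recovers the integrand.
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```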
|
887,473 | <p>I have been struggling with the following claim:</p>
<p>Let $A_n$ be a sequence of compact sets and $A$ a compact set. $A=\lim\sup_n A_n=\lim\inf_n A_n$ iff $d_H(A_n,A)\to 0$ where $d_H(.,.)$ is the Hausdorff metric.</p>
<p>$\lim\inf$ and $\lim\sup$ are defined by $\lim\inf_nA_n=\left\{y\in Y:\forall \varepsilon>0,\exists N, n\geq N\quad \implies B_{\varepsilon}(y)\cap A_n\neq\emptyset\quad \right\}$ and, $\lim\sup_n A_n=\left\{y\in Y: \forall\varepsilon>0,\forall N,\exists n\geq N \text{ so that } B_\epsilon(y)\cap A_n\neq\emptyset \right\}$. </p>
<p>Also $d_H (X,Y)=\max\left(\inf\left\{\epsilon>0:Y\subset\bigcup_{x\in X}B_\epsilon(x) \right\}, \inf\left\{\epsilon>0:X\subset\bigcup_{y\in Y}B_\epsilon(y) \right\} \right)$.</p>
<p>$\Rightarrow$: Let $\varepsilon>0$ be given. For each $a\in A$ $\exists N(a,\varepsilon)$ so that $n\geq N$ implies $B_\varepsilon(a)\cap A_n\neq\emptyset$. By compactness of $A$ we can find finite $a_1,a_2\dots a_k$ and associated $N_1,N_2\dots N_k$ so that $A\subset\cup_{i=1}^k B_\varepsilon(a_i)$. Let $N=\max_{1\leq i\leq k}N_i$. I want to show $n\geq N$ implies $A_n\subset A^\varepsilon$ but couldn't manage. </p>
<p>$\Leftarrow$: I want to show $A\subset\lim\inf A_n\subset\lim\sup A_n\subset A$. Let $\varepsilon>0$ be given. There exists $N$ such that $d_H(A_n,A)<\varepsilon$ for $n$ large enough. Pick $a\in A$, so $a\in A_n^{\varepsilon}$ which implies $B_{\varepsilon}(a)\cap A_n\neq\emptyset$ for all such large $n$'s. Thus $a\in\lim\inf A_n$. To finish this side, I need to show $\lim\sup A_n\subset A$ yet again couldn't find the solution. Thanks for any help!</p>
| Daniel Fischer | 83,702 | <p>For the direction</p>
<p>$$A = \limsup_n A_n = \liminf_n A_n \implies d_H(A_n,A) \to 0,$$</p>
<p>you need some additional hypothesis on the ambient space, namely that it is compact.</p>
<p>Without that hypothesis, the implication does not hold. An explicit counterexample using compact subsets of $\mathbb{R}$ is $A = [0,1]$, and $A_n = [0,1] \cup \{n\}$ for $n\in \mathbb{N}$. Generally, if the ambient (metric) space is not compact, it contains a sequence $(x_n)_{n\in \mathbb{N}}$ without accumulation point, and then $A = \{x_0\}$ and $A_n = \{x_0,x_n\}$ is a counterexample.</p>
<p>So let's suppose that the ambient space is compact.</p>
<p>What you have so far proves that for large enough $n$, you have $A \subset A_n^{2\varepsilon}$. For the converse inclusion, that $A_n \subset A^\varepsilon$ for all large enough $n$, you use the compactness of the ambient space. Suppose $A_n \setminus A^\varepsilon \neq\varnothing$ for infinitely many $n$. Then</p>
<p>$$F_n = \overline{\bigcup_{k=n}^\infty A_k\setminus A^\varepsilon}$$</p>
<p>is a decreasing sequence of nonempty compact sets, hence $F = \bigcap\limits_{n\in\mathbb{N}} F_n \neq \varnothing$. Now, what can be said about the points in $F$?</p>
<p>For the other direction,</p>
<p>$$d_H(A_n,A) \to 0 \implies A = \limsup_n A_n = \liminf_n A_n,$$</p>
<p>you have correctly shown that $A \subset \liminf_n A_n$. Now let $x\in \limsup_n A_n$. Fix $\varepsilon > 0$. Choose $N$ such that $d_H(A_n,A) < \varepsilon/2$ for $n \geqslant N$. Since $x \in \limsup_n A_n$, there is an $n_\varepsilon > N$ such that $A_{n_\varepsilon} \cap B_{\varepsilon/2}(x) \neq\varnothing$. But, for $y \in A_{n_\varepsilon}$, we also have $B_{\varepsilon/2}(y) \cap A \neq \varnothing$, and hence $B_\varepsilon(x) \cap A \neq \varnothing$. That means $x \in A^\varepsilon$. Since $\varepsilon$ was arbitrary, it follows that</p>
<p>$$x \in \bigcap_{\varepsilon > 0} A^\varepsilon = A.$$</p>
|
458 | <p>If you go to the bottom of any page in the SE network (e.g. this one!), you'll see a list of SE sites. In particular there's a link to MathOverflow, that is potentially seen by a large number of people (many of whom are outside of our target audience).</p>
<p>When you put your cursor over that link, there's a hover popup reading "mathematicians". If you try this with many of the other sites you'll find a more detailed description.</p>
<p>We should improve this!</p>
<blockquote>
<p>I'll provide a few samples as answers; please vote for the one you like, and we'll get it fixed.</p>
</blockquote>
| Scott Morrison | 3 | <p>Research mathematics (at graduate level and above)</p>
|
713,098 | <p>The answer to my question might be obvious to you, but I have difficulty with it. </p>
<p>Which equations are correct:</p>
<p>$\sqrt{9} = 3$</p>
<p>$\sqrt{9} = \pm3$</p>
<p>$\sqrt{x^2} = |x|$</p>
<p>$\sqrt{x^2} = \pm x$</p>
<p>I'm confused. When it's right to take an absolute value? When do we have only one value and why? When two and why? </p>
<p>Thank you very much in advance for your help!</p>
| hmakholm left over Monica | 14,366 | <p><em>By definition</em> the square root of a number is the <em>positive</em> number whose square is the original number. So we have $\sqrt9=3$ and $\sqrt{x^2}=|x|$ and no doubt about either.</p>
<p>There <em>is no number</em> whose square root is $-3$ (even if we move to complex numbers and consider principal square roots).</p>
<p>What can create confusion is that we sometimes have an equation such as
$$ x^2 = 9 $$
and say something like "now let's take the square root on both sides" to get
$$ x = \pm 3 $$
which can look like we're saying taking the square root of $9$ gives $\pm 3$. But what really happens is that the square roots give us
$$ |x| = 3 $$
and then there's an implicit invisible step that replaces the absolute value sign with a $\pm$ to get $x=\pm 3$ instead.</p>
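Numerically, the convention is visible immediately (hypothetical sketch): the square-root function only ever returns the principal value.

```python
import math

print(math.sqrt(9))  # 3.0 -- the principal root, never -3.0

# sqrt(x**2) == |x| for every real x: exactly the absolute-value rule above.
for v in (-4.0, -0.5, 0.0, 2.5):
    assert math.sqrt(v**2) == abs(v)
print("ok")
```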
|
2,715,374 | <p>We know that \begin{equation*}
a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}=[a_0,a_1, \cdots, a_n]
\end{equation*}</p>
<p>If $\frac{p_n}{q_n}=[a_0,a_1, \cdots, a_n]$.</p>
<blockquote>
<p>How to prove that $$
\begin{pmatrix}
p_n & p_{n-1} \\
q_n & q_{n-1} \\
\end{pmatrix}=\begin{pmatrix}
a_0 &1 \\
1 & 0 \\
\end{pmatrix}\begin{pmatrix}
a_1 &1 \\
1 & 0 \\
\end{pmatrix}\cdots\begin{pmatrix}
a_n &1 \\
1 & 0 \\
\end{pmatrix}
$$.</p>
</blockquote>
<p>I am getting the answer while checking with $n=0,1,2,3$. I think that it could be done by induction but after assuming $k=n-1$ when I am going to prove $k=n$ the calculation is getting messy. Please help me out in proving this.</p>
| Ethan Bolker | 72,858 | <p>The first line in your extended (non)proof is wrong when $x=n$. So it's not true for all $x$ and $n$.</p>
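The matrix identity in the question can be spot-checked numerically (a hypothetical sketch; the convergent $p_n/q_n$ is the ratio of the first-column entries of the product):

```python
from fractions import Fraction

def matprod(coeffs):
    """Product of the matrices [[a, 1], [1, 0]] for a in coeffs."""
    M = [[1, 0], [0, 1]]
    for a in coeffs:
        M = [[M[0][0] * a + M[0][1], M[0][0]],
             [M[1][0] * a + M[1][1], M[1][0]]]
    return M

def cf_value(coeffs):
    """Evaluate the continued fraction [a0; a1, ..., an] exactly."""
    val = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        val = a + 1 / val
    return val

coeffs = [2, 3, 5, 7]
M = matprod(coeffs)
assert Fraction(M[0][0], M[1][0]) == cf_value(coeffs)  # p_n / q_n
print(M[0][0], M[1][0])  # 266 115
```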
|
2,715,374 | <p>We know that \begin{equation*}
a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}=[a_0,a_1, \cdots, a_n]
\end{equation*}</p>
<p>If $\frac{p_n}{q_n}=[a_0,a_1, \cdots, a_n]$.</p>
<blockquote>
<p>How to prove that $$
\begin{pmatrix}
p_n & p_{n-1} \\
q_n & q_{n-1} \\
\end{pmatrix}=\begin{pmatrix}
a_0 &1 \\
1 & 0 \\
\end{pmatrix}\begin{pmatrix}
a_1 &1 \\
1 & 0 \\
\end{pmatrix}\cdots\begin{pmatrix}
a_n &1 \\
1 & 0 \\
\end{pmatrix}
$$.</p>
</blockquote>
<p>I am getting the answer while checking with $n=0,1,2,3$. I think that it could be done by induction but after assuming $k=n-1$ when I am going to prove $k=n$ the calculation is getting messy. Please help me out in proving this.</p>
| Eric Wofsey | 86,856 | <p>You can't multiply inequations like this: $a\neq b$ does not imply $ac\neq bc$. Indeed, if $c=0$, then $ac=0=bc$ is always true, even if $a\neq b$.</p>
<p>(If you know that $c\neq 0$, then this step would be valid, since if $ac$ were equal to $bc$ you could divide both by $c$ to get $a=b$ which contradicts the fact that $a\neq b$. So, in a sense, this is indeed a division by $0$ error.)</p>
|
2,953,371 | <p>How can I find the derivative of this function?
<span class="math-container">$$f(x)= (4x^2 + 2x +5)^{0.5}$$</span></p>
| Toffomat | 380,397 | <p>It helps to look at the definition of a convolution: Given two functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span> (assumed from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span>), the convolution <span class="math-container">$f*g$</span> is defined as
<span class="math-container">$$(f*g) (t) = \int\limits_{-\infty}^\infty f(t-y) g(y) \,\text d y\,.$$</span></p>
<p>In your case, <span class="math-container">$f(t)=\delta(3-t)$</span> and <span class="math-container">$g(t)=\delta(t-2)$</span>, so we have
<span class="math-container">$$(f*g) (t) =\int \delta(3-(t-y)) \,\delta(y-2) \,\text{d} y\,.$$</span>
(You can switch which function you call <span class="math-container">$f$</span> and which one <span class="math-container">$g$</span> without changing the result, since the convolution is symmetric.) Now the essential property of the <span class="math-container">$\delta$</span> function is that <span class="math-container">$\int f(y) \delta(y-a)\,\text d y=f(a)$</span>, so the result is
<span class="math-container">$$(f*g) (t) =\int \delta(3-(t-y)) \,\delta(y-2) \,\text{d} y =\delta(3-t+y)\big|_{y=2}=\delta(5-t)\,.$$</span></p>
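On a discrete grid the same computation looks like this (a hypothetical numpy sketch, with each delta replaced by a one-hot array of unit mass):

```python
import numpy as np

n = 10
f = np.zeros(n); f[3] = 1.0  # stands in for delta(3 - t): spike at t = 3
g = np.zeros(n); g[2] = 1.0  # stands in for delta(t - 2): spike at t = 2

h = np.convolve(f, g)        # discrete convolution
print(np.argmax(h))          # 5 -- the result spikes at t = 5, as expected
```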
|
676,573 | <p>Exercise: Write the polynomial $1 + 2x -x^2 + 5x^3 - x^4$ at powers of $(x-1)$.</p>
<p>I presume this exercise is solved using Taylor Series, since it belongs to that chapter, but have no idea how to solve it. Otherwise, it's very straightforward.</p>
<p>Note: The above exercise is <strong>not</strong> homework.</p>
| Ant | 66,711 | <p>Another way to do this is just what you would do with any other function:</p>
<p>Calculate $$p(1), p'(1), p''(1),\ldots$$ (straightforward, since $p$ is a polynomial)</p>
<p>and use the derivatives to build the appropriate Taylor series around $x_0 = 1$.</p>
<p>Indeed, it is considerably faster (in this case) than the method proposed by Mark Bennet.</p>
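This derivative-based approach is easy to carry out with sympy (hypothetical sketch):

```python
import sympy as sp

x = sp.symbols('x')
p = 1 + 2*x - x**2 + 5*x**3 - x**4
x0 = 1

# Taylor coefficients p^(k)(1)/k! give the expansion in powers of (x - 1).
taylor = sum(p.diff(x, k).subs(x, x0) / sp.factorial(k) * (x - x0)**k
             for k in range(5))
print(taylor)                      # the polynomial written in powers of (x - 1)
assert sp.expand(taylor - p) == 0  # expanding recovers the original polynomial
```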
|
3,516,241 | <p>Consider the equation:</p>
<p><span class="math-container">$$ x ^ 4 - (2m - 1) x^ 2 + 4m -5 = 0 $$</span></p>
<p>with <span class="math-container">$m \in \mathbb{R}$</span>. I have to find the values of <span class="math-container">$m$</span> such that the given equation has all of its roots real.</p>
<p>This is what I did:</p>
<p>Let <span class="math-container">$ u = x^2, \hspace{.25cm} u\ge 0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$ u ^ 2 - (2m - 1)u + 4m -5 = 0 $$</span></p>
<p>Now since we have </p>
<p><span class="math-container">$$ u = x ^ 2$$</span></p>
<p>That means</p>
<p><span class="math-container">$$x = \pm \sqrt{u}$$</span></p>
<p>That means that the roots <span class="math-container">$x$</span> are real only if <span class="math-container">$u \ge 0$</span>.</p>
<p>So we need to find the values of <span class="math-container">$m$</span> such that all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>. If all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>, that means that the sum of <span class="math-container">$u$</span>'s is <span class="math-container">$\ge 0$</span> <strong>and</strong> the product of <span class="math-container">$u$</span>'s is <span class="math-container">$ \ge 0 $</span>. Using Vieta's formulas</p>
<p><span class="math-container">$$S = u_1 + u_2 = - \dfrac{b}{a} \hspace{2cm} P = u_1 \cdot u_2 = \dfrac{c}{a}$$</span></p>
<p>where <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> are the coefficients of the quadratic, we can solve for <span class="math-container">$m$</span>. We get:</p>
<p><span class="math-container">$$S = - \dfrac{-(2m - 1)}{1} = 2m - 1$$</span></p>
<p>We need <span class="math-container">$S \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{1}{2}$</span> <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$P = \dfrac{4m - 5 }{1} = 4m - 5$$</span></p>
<p>We need <span class="math-container">$P \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{5}{4}$</span> <span class="math-container">$(2)$</span></p>
<p>Intersecting <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> we get the final answer:</p>
<p><span class="math-container">$$ m \in \bigg [ \dfrac{5}{4}, \infty \bigg )$$</span></p>
<p>My question is: Is this correct? Is my reasoning sound? Is there another way (maybe even a better way!) to solve this?</p>
| user289143 | 289,143 | <p>You also need <span class="math-container">$\Delta$</span> to be non-negative in order for the solutions to be real.</p>
<p><span class="math-container">$\Delta = (2m-1)^2-4(4m-5)=4m^2-4m+1-16m+20=4m^2-20m+21$</span> </p>
<p><span class="math-container">$m_{1,2}=\frac{10 \pm \sqrt{100-84}}{4}=\frac{10 \pm 4}{4}=\{\frac{3}{2},\frac{7}{2} \}$</span></p>
<p>Thus <span class="math-container">$\Delta \geq 0 \Leftrightarrow m \in (-\infty, \frac{3}{2}] \cup [\frac{7}{2},\infty)$</span></p>
<p>Take for example <span class="math-container">$m=2$</span>: now <span class="math-container">$m \geq \frac{5}{4}$</span> but the equation </p>
<p><span class="math-container">$$
u^2-(2 \cdot 2 -1)u+4\cdot 2-5=0 \\
u^2-3u+3=0
$$</span>
has no real solutions: the roots are <span class="math-container">$\frac{3 \pm \sqrt{-3}}{2}$</span>.</p>
<p>Hence, intersecting with <span class="math-container">$m \ge \frac{5}{4}$</span>, we need <span class="math-container">$m \in \left[\frac{5}{4},\frac{3}{2}\right]\cup\left[\frac{7}{2},\infty\right)$</span>.</p>
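The role of the discriminant is easy to check numerically (hypothetical numpy sketch):

```python
import numpy as np

def all_roots_real(m, tol=1e-9):
    # Roots of x**4 - (2m-1)*x**2 + (4m-5) = 0.
    roots = np.roots([1, 0, -(2 * m - 1), 0, 4 * m - 5])
    return bool(np.all(np.abs(roots.imag) < tol))

print(all_roots_real(2))  # False: m >= 5/4 alone is not sufficient
print(all_roots_real(4))  # True:  here Delta >= 0 holds as well
```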
|
2,892,342 | <p>Given two adjacent sides and all four angles of a quadrilateral, what is the most efficient way to calculate the angle that is made between a side and the diagonal of the quadrilateral that crosses (but does not necessarily bisect) the angle in between the two known sides?</p>
<p>Other known information:</p>
<ul>
<li>The two angles that touch one and only one known side are right angles.</li>
<li>The angle that touches both known sides equals $n-m$</li>
<li>The angle that doesn't touch any known sides equals $180-n+m$, which can be inferred through the above statement and the rule that states that the interior angles of a quadrilateral must add up to $360$, although this is also known from other aspects of the broader problem</li>
<li>$n$ and $m$ cannot easily be explained with words. See the picture at the end of this post.</li>
</ul>
<p>From what I can tell, the most efficient solution to this problem is to solve for the OTHER diagonal using the law of cosines and the two known sides $x$ and $y$ from the sketch, use the law of sines and/or cosines to solve for the parts of angles $A$ and $C$ that make up the left-most triangle made by the other diagonal, find the other parts of angles $A$ and $C$ by using $90-A$ and $90-C$, respectively, since $A$ and $C$ are both right angles, then use the law of sines once more to find sides $AB$ and $BC$, and FINALLY use the law of sines to find any of the four thetas. Seems tedious. Am I missing something?</p>
<p>Here is an awful sketch I made of the problem:
<a href="https://i.stack.imgur.com/hfkoG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfkoG.png" alt="Here is an awful sketch I made of the problem"></a></p>
| Exodd | 161,426 | <p>Well, your quadrilateral can be inscribed in a circle, since the sum of opposite angles is $180^\circ$, so the angle $\theta_1$ equals the inscribed angle $ACD$, which can be computed with one cosine rule and one sine rule.</p>
<p>and don't call your sketch awful. It's quite pretty ;)</p>
|
1,282,843 | <p>I'm having trouble proving the following statement:</p>
<blockquote>
<p>$x(u, v) = (u − u^ 3/ 3
+ uv^2 , v − v^ 3/ 3
+ u^ 2 v, u^2 − v^ 2 )$ is a minimal surface and x is not injective</p>
</blockquote>
<p>Proving that $x(u,v)$, which is also known as the Enneper surface, is minimal is not a problem. However, I can't prove that $x$ is not injective.</p>
<p>Is there any smart way to do this rather than trying some pairs $(a,b)$ and $(c,d)$, hoping that $x(a,b)$ will be equal to $x(c,d)$ with $(a,b)$ and $(c,d)$ different pairs?</p>
| Badshah Khan | 698,917 | <p>Just use a counterexample, i.e.
<span class="math-container">$$x( \sqrt3 , 0) = (0,0,3)= x(- \sqrt3 , 0)$$</span>
With
<span class="math-container">$$( \sqrt3 , 0) \ne (- \sqrt3 , 0)$$</span></p>
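The counterexample is straightforward to confirm numerically (hypothetical sketch):

```python
import math

def enneper(u, v):
    return (u - u**3 / 3 + u * v**2,
            v - v**3 / 3 + u**2 * v,
            u**2 - v**2)

a = enneper(math.sqrt(3), 0)
b = enneper(-math.sqrt(3), 0)

# Both parameter pairs map (up to rounding) to the same point (0, 0, 3).
assert all(abs(ai - bi) < 1e-9 for ai, bi in zip(a, b))
print(a)
```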
|
3,271,675 | <p>Let <span class="math-container">$p$</span> be a prime of the form <span class="math-container">$p = a^2 + b^2$</span> with <span class="math-container">$a,b \in \mathbb{Z}$</span> and <span class="math-container">$a$</span> an odd prime. Prove that <span class="math-container">$(a/p) =1$</span></p>
<p>Could anyone give me a hint for the solution please?</p>
| Mark Bennet | 2,906 | <p>To explain further, we have that <span class="math-container">$p\equiv b^2 \bmod a$</span> from the given equation.</p>
<p>Quadratic reciprocity tells us that if <span class="math-container">$a$</span> and <span class="math-container">$p$</span> are odd primes and either leaves remainder <span class="math-container">$1$</span> on division by <span class="math-container">$4$</span> we have <span class="math-container">$p$</span> is a square <span class="math-container">$\bmod a$</span> if and only if <span class="math-container">$a$</span> is a square <span class="math-container">$\bmod p$</span>. (and if both leave remainder <span class="math-container">$3$</span> modulo <span class="math-container">$4$</span> then precisely one of the primes is a square modulo the other).</p>
<p>Legendre Symbols are a convenient way of writing this - a notation - but it is important to understand what they mean. The fact that <span class="math-container">$a$</span> is an odd prime tells us that <span class="math-container">$p\gt 2$</span> is odd, and the fact that <span class="math-container">$p$</span> is the sum of two squares tells us that <span class="math-container">$p\equiv 1 \bmod 4$</span>.</p>
<hr>
<p>For the first part we have <span class="math-container">$p=a^2+b^2$</span>. Take this modulo <span class="math-container">$a$</span> and it gives <span class="math-container">$p\equiv b^2\bmod a$</span>. That is simply what modulo <span class="math-container">$a$</span> means.</p>
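One instance of the statement can be checked computationally via Euler's criterion (hypothetical sketch; here $p = 13 = 2^2 + 3^2$ with $a = 3$ an odd prime):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# p = 13 = 2**2 + 3**2 with a = 3: the theorem predicts (3/13) = 1.
print(legendre(3, 13))  # 1 -- indeed 4**2 = 16 = 3 (mod 13)
print(legendre(5, 29))  # 1 -- another instance: 29 = 2**2 + 5**2
```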
|
1,714,902 | <p>(Question edited to shorten and clarify it, see the history for the original)</p>
<p>Suppose we are given two $n\times n$ matrices $A$ and $B$. I am interested in finding the closest matrix to $B$ that can be achieved by multiplying $A$ with orthogonal matrices. To be precise, the problem is</p>
<p>$$\begin{align}
\min_{U,V}\ & \|UAV^T-B\|_F \\
\text{s.t.}\ & U^TU = I \\
& V^TV = I,
\end{align}$$
where $\|\cdot\|_F$ is the Frobenius norm.</p>
<p>Without loss of generality*, we can restrict our attention to <em>diagonal</em> matrices with nonnegative diagonal entries $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$. My hypothesis is that in this case the optimal $UAV^T$ is still diagonal, with its entries being the permutation of $a_i$ which minimizes $\sum_i (a_{\pi_i} - b_i)^2$. In other words, $U=V=P$, where $P$ is the permutation matrix corresponding to said permutation $\pi$. This appears to be true based on numerical tests, but I don't know how to prove it. Is there an elegant proof?</p>
<hr>
<p>*For arbitrary $A$ and $B$, take their singular value decompositions $A=U_A\Sigma_AV_A^T$ and $B=U_B\Sigma_BV_B^T$ to obtain
$$\begin{align}
\|UAV^T-B\|_F &= \|UU_A\Sigma_AV_AV^T-U_B\Sigma_BV_B^T\|_F \\
&= \|U'\Sigma_AV'^T-\Sigma_B\|_F,
\end{align}$$
where $U'=U_B^{-1}UU_A$ and $V'=V_B^{-1}VV_A$ are orthogonal. So we can work with $\Sigma_A$ and $\Sigma_B$ instead.</p>
| Community | -1 | <p>We may assume that $A,B$ are non-negative diagonal. We calculate the extrema of the function $f:(U,V)\in O(n)^2\rightarrow tr((UAV^T-B)(VAU^T-B))$.</p>
<p>Note that $H_1$ is in the tangent space to $O(n)$ in $U$ iff $U^TH_1\in SK$ (it is skew), that is $H_1=UH$ where $H\in SK$.</p>
<p>Then $\dfrac{\partial f}{\partial U}(U,V):H\in SK\rightarrow 2tr(AV^T(VAU^T-B)UH)=0$ for all $H$. Then $AV^T(VAU^T-B)U=A(A-V^TBU)$ is symmetric.</p>
<p>In the same way, $\dfrac{\partial f}{\partial V}(U,V):K\in SK\rightarrow 2tr(AU^T(UAV^T-B)VK)=0$ for all $K$. Then $AU^T(UAV^T-B)V=A(A-U^TBV)$ is symmetric, that is $(A-V^TBU)A$ is symmetric.</p>
<p>(HYP). Assume that the singular values of $A$ are distinct and that the singular values of $B$ too.</p>
<p>Let $Z=A-V^TBU$. One has $AZ=Z^TA$, $ZA=AZ^T$. Then $Z$ is diagonal, that is, $D=V^TBU$ is diagonal. Then $D^2=V^TB^2V=U^TB^2U$; it is easy to see that $U=V$ and that both are signed permutation matrices (nonzero entries $\pm 1$). Thus the minimum can be obtained for $U=V$, a permutation; moreover, this choice is unique.</p>
<p>Now, if we don't assume (HYP), then there are infinitely many pairs $(U,V)$ that reach the minimum. Yet, by a continuity argument, we can prove that there is a permutation $U=V$ that reaches the minimum. Indeed, the required minimum is a continuous function of $A,B$.</p>
|
3,426,441 | <p>I'm working through some notes and trying to understand a piece of the following statement:</p>
<p>Suppose that the bivariate random variable <span class="math-container">$(X,Y)$</span> is uniformly distributed on the square <span class="math-container">$[0,1]^2$</span>, that is the joint probability distribution function of (X,Y) is given by <span class="math-container">$$f_{X,Y}(x,y)=\begin{cases} 1 \space\text{if}\,0\lt x\lt 1 \text{ and} \space 0\lt y \lt 1\\
0 \space \text{otherwise} \end{cases}$$</span>.</p>
<p>What exactly is the function saying?
Shouldn't it be assigning probabilities to <span class="math-container">$(x,y)$</span> pairs, as it's a pdf?
Is it saying that the probability of an <span class="math-container">$(x,y)$</span> pair lying in the square is 1 if <span class="math-container">$0\lt x\lt 1$</span> and <span class="math-container">$0\lt y\lt 1$</span>?</p>
<p>I get the feeling that my understanding of density functions needs a bit of reinforcing....</p>
| Charith | 321,851 | <p>Will it be sufficient to write the function as </p>
<p><span class="math-container">$f:A\rightarrow B$</span> such that<br>
<span class="math-container">$f(x)=\ln(x)$</span>?</p>
|
3,426,441 | <p>I'm working through some notes and trying to understand a piece of the following statement:</p>
<p>Suppose that the bivariate random variable <span class="math-container">$(X,Y)$</span> is uniformly distributed on the square <span class="math-container">$[0,1]^2$</span>, that is the joint probability distribution function of (X,Y) is given by <span class="math-container">$$f_{X,Y}(x,y)=\begin{cases} 1 \space\text{if}\,0\lt x\lt 1 \text{ and} \space 0\lt y \lt 1\\
0 \space \text{otherwise} \end{cases}$$</span>.</p>
<p>What exactly is the function saying?
Shouldn't it be assigning probabilities to <span class="math-container">$(x,y)$</span> pairs, as it's a pdf?
Is it saying that the probability of an <span class="math-container">$(x,y)$</span> pair lying in the square is 1 if <span class="math-container">$0\lt x\lt 1$</span> and <span class="math-container">$0\lt y\lt 1$</span>?</p>
<p>I get the feeling that my understanding of density functions needs a bit of reinforcing....</p>
| William Elliot | 426,203 | <p>Yes, a function can be extended to subsets of the domain.<br>
That is called the set extension of the function.<br>
If A is a subset of the domain of a function f,<br>
f(A) or for careful notation, f[A] = { f(x) : x in A }.<br>
The inverse set extension is f<span class="math-container">$^{-1}$</span>(B) or<br>
f<span class="math-container">$^{-1}$</span>[B] = { x : f(x) in B }<br>
which is commonly used in the topological definition of continuity. </p>
<p>For your example, B = ln[A].</p>
|
4,159,771 | <p>I understand the geometric intuition behind determinants but what is the real life use of it? I'm not looking for answers along the lines of "it helps to find solutions to linear systems" etc, unless this is one of those concepts that is useful because it allows us to do "more math". I'm more interested in knowing practical applications of determinants in science, engineering, computer graphics etc.</p>
| Vercassivelaunos | 803,179 | <p>Area integrals and volume integrals over complicated areas or volumes (like spheres, balls, ellipses, hyperboloids, etc.) can be calculated using the transformation theorem, which is a generalization of integration by substitution. It uses the determinant of the differential/Jacobian of the coordinate transformation.</p>
<p>Examples for its use: wherever we calculate higher-dimensional integrals, so essentially any and all fields of physics and engineering. Electrodynamics profits especially, since its fundamental equations (Maxwell's equations) involve precisely such integrals.</p>
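As a concrete illustration (hypothetical sketch), the volume of the unit ball can be estimated by Monte Carlo integration in spherical coordinates, where the Jacobian determinant of the coordinate change is $r^2\sin\theta$:

```python
import math
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    r = random.uniform(0, 1)
    theta = random.uniform(0, math.pi)
    phi = random.uniform(0, 2 * math.pi)
    total += r**2 * math.sin(theta)  # the Jacobian determinant

box = 1 * math.pi * (2 * math.pi)    # volume of the (r, theta, phi) box
volume = total / N * box
print(volume)  # close to 4*pi/3 ~ 4.19
```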
|
3,121,361 | <p>Given <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>. Group operation is defined as:
<span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p>
<p>How to prove closure property to prove that G is a group?</p>
| Arthur | 15,500 | <p>Hint: show that <span class="math-container">$a_n-1\leq 1+\frac12+\frac14+\cdots+\frac1{2^{n-1}}$</span></p>
|
3,121,361 | <p>Given <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>. Group operation is defined as:
<span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p>
<p>How to prove closure property to prove that G is a group?</p>
| Michael Rozenberg | 190,319 | <p>By the binomial theorem <span class="math-container">$$1<a_n=1+1+\frac{1}{2!}\left(1-\frac{1}{n}\right)+\frac{1}{3!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)+...+\frac{1}{n!}\left(1-\frac{1}{n}\right)...<$$</span>
<span class="math-container">$$<2+\frac{1}{2!}+\frac{1}{3!}+...+\frac{1}{n!}<2+\frac{1}{2}+\frac{1}{2^2}+...+\frac{1}{2^{n-1}}<3$$</span>
Can you end it now?</p>
|
2,555,499 | <p>Let $v_1=(1,1)$ and $v_2=(-1,1)$ vectors in $\mathbb{R}^2$. They are <strong>clearly linearly independent</strong> since each is not a scalar multiple of the other. The following information about a linear transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ is given: $$f(v_1)=10 \cdot v_1 \text{ and } f(v_2)=4 \cdot v_2$$</p>
<ol>
<li><p><strong>Give the transformation matrix $_vF_v$ with respect to ordered basis $\mathcal{B}=(v_1,v_2)$</strong></p></li>
<li><p><strong>Give the transformation matrix $_eF_e$ with respect to the ordered standard basis $e=(e_1,e_2)$ of $\mathbb{R}^2$</strong></p></li>
</ol>
<p>Recall that
$$ \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}=\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix} $$
We need a matrix $_eF_e$ such that:
$$_eF_e\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}=\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 10 & 0 \\ 0 & 4 \end{bmatrix}$$
then
$$_eF_e=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}
=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix}$$
Okay so I'm pretty sure that $${}_eF_e={}_eF_v \cdot {}_vF_v \cdot {}_vF_e$$
And I figured I could find ${}_eF_e=\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix}$ in the following equation $$\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix} \text{ } \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}= \begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \\ \Rightarrow
\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \text{ } \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}= \begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} \\ \Rightarrow {}_eF_e=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} $$</p>
<p>Now, how can I find ${}_v{F}_v$? I got a feeling that I'm making it more difficult than necessary</p>
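<p>The matrix computation above can be double-checked numerically; a quick sketch with numpy (variable names are illustrative):</p>

```python
import numpy as np

S = np.array([[1.0, -1.0],
              [1.0,  1.0]])      # columns are the eigenvectors v1, v2
D = np.diag([10.0, 4.0])         # vFv: f in the ordered basis (v1, v2)

# change of basis back to the standard basis: eFe = S * D * S^{-1}
eFe = S @ D @ np.linalg.inv(S)

print(eFe)  # [[7. 3.], [3. 7.]]
```

<p>In particular ${}_vF_v$ is just $\operatorname{diag}(10,4)$, since $v_1$ and $v_2$ are eigenvectors of $f$.</p>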
| user21820 | 21,820 | <p>This is really a matter of convention, which varies over time and from place to place, but in modern mathematics precedence is generally as follows (from highest to lowest, and with those of the same precedence put on the same line) where "$∙$" denotes the positions of the subexpressions:</p>
<ul>
<li><p>Brackets:   $(∙)$</p></li>
<li><p>Function application:   $∙(∙)$</p></li>
<li><p>Exponentiation:   $∙^∙$</p></li>
<li><p>Juxtaposition:   $∙∙$</p></li>
<li><p>Fractions:   $\dfrac∙∙$</p></li>
<li><p>Product/Summation:   $\prod_∙^∙ ∙$   $\sum_∙^∙ ∙$</p></li>
<li><p>Multiplicative operations:   $∙ \times ∙$   $∙ \div ∙$</p></li>
<li><p>Negation:   $-∙$.</p></li>
<li><p>Additive operations:   $∙+∙$   $∙-∙$</p></li>
</ul>
<p>I can easily justify that juxtaposition is given higher precedence in modern mathematics. Consider that "$\prod_{k=1}^n a_k b_k \times \sum_{m=1}^n c_m d_m$" is interpreted as "$( \prod_{k=1}^n ( a_k b_k ) ) \times ( \sum_{m=1}^n ( c_m d_m ) )$" and <strong>not</strong> as "$\prod_{k=1}^n ( a_k b_k \times \sum_{m=1}^n c_m d_m )$"!</p>
|
3,100,831 | <p>What is the domain of <span class="math-container">$g(x)=\frac{1}{1-\tan x}$</span> </p>
<p>I tried it and got this. But I'm not really sure if it is right. Is that gonna be like this ? <span class="math-container">$(\mathbb{R}, \frac{\pi}{4})$</span></p>
| Kavi Rama Murthy | 142,385 | <p>It is <span class="math-container">$\mathbb R \setminus \bigcup_{n \in \mathbb Z}(\{n\pi +\pi /2\} \cup \{n\pi +\pi /4\})$</span></p>
|
206,780 | <p>Let $f:X\to Y$ be a measurable function. The Banach indicatrix
$$
N(y,f) = \#\{x\in X \mid f(x) = y\}
$$
is the number of the pre-images of $y$ under $f$. If there are infinitely many pre-images then $N(y,f) = \infty$. </p>
<p>Let $X\subset\mathbb R^n$, $Y\subset\mathbb R^m$ with Lebesgue measure.</p>
<p><em>I am interested to know if $N(y,f)$ is a measurable function (?)</em> </p>
<ul>
<li>If $X$ is an interval (say $X=[a,b]$) and $f$ is a continuous function, the answer is positive (<a href="https://math.stackexchange.com/q/68635/23566">https://math.stackexchange.com/q/68635/23566</a>).</li>
<li>In Federer's Geometric measure theory we find following theorem </li>
</ul>
<blockquote>
<p>Let $X$ be a separable metric space and let $f(A)$ be $\mu$-measurable for all Borel subsets $A$ of $X$.
Let $\zeta(S) = \mu(f(S))$ for $S\subset X$ and let $\psi$ be the measure on $X$ defined by the Carathéodory construction from $\zeta$. Then
$$
\psi(A) = \int\limits_{A}N(y,f)\, d\mu_{Y}
$$
for every Borel set $A\subset X$.</p>
</blockquote>
<p><em>Does it say anything about measurability of $N(y,f)$ ?</em> </p>
| Nikita Evseev | 15,946 | <p>This is an attempt to get rid of the continuity requirement.
The following proof is essentially an adaptation of Banach's original proof in the case of a continuous function defined on a segment <span class="math-container">$[a,b]$</span>. See also <a href="https://math.stackexchange.com/a/144832/23566">https://math.stackexchange.com/a/144832/23566</a>.</p>
<blockquote>
<p><strong>Lemma</strong> Let <span class="math-container">$f\colon \mathbb R^n \to \mathbb R^m$</span> be a measurable function and
an image <span class="math-container">$f(B)$</span> is measurable for any Borel measurable set <span class="math-container">$B$</span>.
Then the Banach Indicatrix <span class="math-container">$N(y,f)$</span> is measurable.</p>
</blockquote>
<p><strong>Proof.</strong> We use a dyadic decomposition.
For each integer <span class="math-container">$k\geq 0$</span> consider the collection of cubes <span class="math-container">$\{P_i^{(k)}\}$</span> of the form
<span class="math-container">$$
P_i^{(k)} = (a_1^i\cdot2^{-k},(a_1^i+1)\cdot2^{-k}]\times\cdots\times(a_n^i\cdot2^{-k},(a_n^i+1)\cdot2^{-k}],
$$</span>
where the <span class="math-container">$a^i_j$</span> are all integers.
The properties we need are the following: the cubes <span class="math-container">$P_i^{(k)}$</span> are disjoint; <span class="math-container">$\mathbb R^n = \bigcup\limits_{i=1}^{\infty}P_i^{(k)}$</span>; <span class="math-container">$\operatorname{diam} P_i^{(k)} = \sqrt{n}\,2^{-k}\to 0 $</span> as <span class="math-container">$k\to\infty$</span>.</p>
<p>For <span class="math-container">$y \in \mathbb R^m$</span> and <span class="math-container">$i\in \mathbb N$</span> let
<span class="math-container">$$
L_{i}^{(k)}(y) = \begin{cases} 1, & \text{if } y \in f(P_{i}^{(k)}), \\ 0, & \text{if } y \not\in f(P_{i}^{(k)}).
\end{cases}
$$</span>
The functions <span class="math-container">$L_{i}^{(k)}(y)$</span> are non-negative and measurable because the set <span class="math-container">$f(P_{i}^{(k)})$</span> is measurable.
Therefore the sum
<span class="math-container">$$
N_k(y) =\sum\limits_{i=1}^{\infty}L_{i}^{(k)}(y)
$$</span>
is also measurable.
Thus, the sequence <span class="math-container">$(N_k)_{k=1}^\infty$</span> of measurable functions is increasing and therefore
the pointwise limit
<span class="math-container">$$
N^*(y) = \lim_{k\to\infty} N_k(y)
$$</span>
exists and is a measurable function of <span class="math-container">$y$</span>.</p>
<p>Note that <span class="math-container">$N_k(y)$</span> simply counts on how many of the cubes <span class="math-container">$P_{i}^{(k)}$</span> the function <span class="math-container">$f$</span> attains the value <span class="math-container">$y$</span> at least once.
Thus <span class="math-container">$N(y,f) \geq N_k(y)$</span> for all <span class="math-container">$k$</span>, so <span class="math-container">$N(y,f) \geq N^*(y)$</span>.</p>
<p>Let us argue that <span class="math-container">$N^*(y) \geq N(y,f)$</span>.
Let <span class="math-container">$q$</span> be an integer such that <span class="math-container">$N(y,f) \geq q$</span>.
Then there exist <span class="math-container">$q$</span> different points <span class="math-container">$x_1,\dots,x_q$</span> such that <span class="math-container">$f(x_j) = y$</span>.
If <span class="math-container">$k$</span> is so large that the points <span class="math-container">$x_1,\dots,x_q$</span> lie in distinct cubes <span class="math-container">$\{P_{i_j}^{(k)}\}_1^q$</span>
then <span class="math-container">$N_k(y) \geq q$</span>.
This shows <span class="math-container">$N^*(y) \geq N(y,f)$</span> and thus <span class="math-container">$N^*(y) = N(y,f)$</span>, establishing measurability.</p>
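<p>For intuition, here is a rough one-dimensional numerical sketch of the counting functions <span class="math-container">$N_k$</span>. The endpoint-sampling test for whether <span class="math-container">$y \in f(P_i^{(k)})$</span> is an illustrative assumption, valid only approximately and only for continuous <span class="math-container">$f$</span> (by the intermediate value theorem):</p>

```python
def N_k(f, y, a, b, k, samples=64):
    """Count dyadic intervals of width 2**-k in [a, b] whose image appears
    to contain y, testing min <= y <= max over sampled values (including
    both endpoints); for continuous f this detects each simple crossing."""
    width = 2.0 ** -k
    n = int(round((b - a) / width))
    count = 0
    for i in range(n):
        lo = a + i * width
        vals = [f(lo + width * j / samples) for j in range(samples + 1)]
        if min(vals) <= y <= max(vals):
            count += 1
    return count

f = lambda x: x**3 - x
print(N_k(f, 0.1, -2.0, 2.0, 6))  # x^3 - x = 0.1 has 3 solutions in [-2, 2]
print(N_k(f, 5.0, -2.0, 2.0, 6))  # x^3 - x = 5 has 1 solution in [-2, 2]
```

<p>As <span class="math-container">$k$</span> grows, the count stabilizes at the number of preimages, mirroring the limit <span class="math-container">$N^*(y)=N(y,f)$</span> in the proof.</p>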
<hr />
<p><strong>EDIT</strong></p>
<p>After a while I came to the following</p>
<blockquote>
<p><strong>Theorem</strong> Let <span class="math-container">$f:X\to Y$</span> be a <span class="math-container">$\mu_X$</span>-measurable mapping, and <span class="math-container">$A\subset X$</span> be a Borel set.
Then <span class="math-container">$f$</span> can be redefined on a set of <span class="math-container">$\mu_X$</span>-measure zero in such a way that
the Banach indicatrix <span class="math-container">$N(y,f,A)$</span> is a <span class="math-container">$\mu_Y$</span>-measurable function.</p>
</blockquote>
<p>This was partly known, though I have written a proof <a href="http://arxiv.org/abs/1508.02902" rel="nofollow noreferrer">here</a>.</p>
|
2,418,954 | <p>Using Vieta's formulas, I can get $$\begin{align} \frac{1}{x_1^3} + \frac{1}{x_2^3} + \frac{1}{x_3^3} &= \frac{x_1^3x_2^3 + x_1^3x_3^3 + x_2^3x_3^3}{x_1^3x_2^3x_3^3} = \frac{x_1^3x_2^3 + x_1^3x_3^3 + x_2^3x_3^3}{\left (-\frac{d}{a} \right)^3}\end{align}$$
But then I don't know how to substitute the numerator.</p>
| shrimpabcdefg | 473,212 | <p>If $x \neq 0$ is a solution to $at^3+bt^2+ct+d=0$ then since $a+b(\frac{1}{x})+c(\frac{1}{x})^2+d(\frac{1}{x})^3=0$, $\frac{1}{x}$ is a solution to $dt^3+ct^2+bt+a=0$. Thus if we have $\frac{1}{x_i}=y_i$ for $i=1,2,3$, </p>
<p>\begin{align*}
\frac{1}{x_1^3}+\frac{1}{x_2^3}+\frac{1}{x_3^3}&=y_1^3+y_2^3+y_3^3\\
&=(y_1+y_2+y_3)\left[(y_1+y_2+y_3)^2-3(y_1y_2+y_2y_3+y_3y_1)\right]+3y_1y_2y_3\\
&=(-\frac{c}{d})\left[(-\frac{c}{d})^2-3\cdot\frac{b}{d}\right]-3\cdot\frac{a}{d}\\
&=\frac{-c^3+3bcd-3ad^2}{d^3}.
\end{align*}</p>
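<p>The final formula can be checked numerically; a sketch with numpy (the cubic $t^3+2t^2+3t+4$ is an arbitrary test case):</p>

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
roots = np.roots([a, b, c, d])             # roots of a t^3 + b t^2 + c t + d

lhs = np.sum(1.0 / roots**3)               # 1/x1^3 + 1/x2^3 + 1/x3^3
rhs = (-c**3 + 3*b*c*d - 3*a*d**2) / d**3  # the closed form derived above

print(lhs.real, rhs)  # both ≈ -3/64 = -0.046875
```

<p>The imaginary parts of the complex-conjugate roots cancel, so the sum is real as expected.</p>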
|
2,979,315 | <p>Let <span class="math-container">$X$</span> be a continuous random variable with uniform distribution between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Compute the distribution of <span class="math-container">$Y = \sin(2\pi X)$</span>.</p>
<p><span class="math-container">$\sin(2\pi \cdot 0) = 0$</span> and <span class="math-container">$\sin(2\pi \cdot 1) = 0$</span>. So the inverse image of a point under this function can contain multiple points. How can I find the PDF of <span class="math-container">$Y$</span> then?</p>
| Servaes | 30,382 | <p><strong>Hint:</strong> The composition of two reflections is a rotation.</p>
|
799,183 | <p>I'm trying to work out what the transformation $T:z \rightarrow -\frac{1}{z}$ does (e.g. reflection in a line, rotation around a point, etc.). Any help on how to do this would be greatly appreciated! I've tried seeing what it does to $1$ and $i$ but it hasn't helped me. Thanks!</p>
| user21820 | 21,820 | <p>What does negation do? Then what does taking the reciprocal do to the length and the angle? Note that you will need to know what <a href="http://en.wikipedia.org/wiki/Circle_inversion" rel="nofollow">inversion</a> is, to describe what happens to the length.</p>
|
2,261,500 | <p>I try to prove that statement using only Bachet-Bézout theorem (I know that it's not the best technique). So I get $k$ useful equations with $n_1$ then $(k-1)$ useful equations with $n_2$ ... then $1$ useful equation with $n_{k-1}$. I multiply all these equations to obtain $1$ for one side. For the other side I'm lost (there are too many terms) but I want to make appear the form $n_1 L_1+...+n_k L_k$.</p>
<p>Supposing the existence of all the integers we need :</p>
<p>$\underbrace{(a_1n_1+a_2n_2)(a_1n_1+a_3n_3)...(a_1n_1+a_kn_k)}_{\textit{k equations}} \underbrace{(b_2n_2+b_3n_3)...(b_2n_2+b_kn_k)}_{\textit{(k-1) equations}}...\underbrace{(\mu_{k-1} n_{k_1}+\mu_{k} n_k)}_{\textit{1 equation}}=1$</p>
<p>Maybe we can reduce the number of useful equations or start an induction to identify a better form for the product.</p>
<p>Thanks in advance !</p>
| Lazy Lee | 430,040 | <p>According to the Bachet-Bezout Lemma, the gcd of two integers $gcd(a,b)=d$ is the smallest positive integer that can be written as $ax+by=d$ for some integers $x,y$. We split the discussion into two parts:</p>
<p><strong>If $k$ is even</strong>: Then there exist integers $\{x_i\}$ such that $$n_1x_1+n_2x_2=1$$$$n_3x_3+n_4x_4=1$$$$...$$$$n_{k-1}x_{k-1}+n_kx_k=1$$ Just multiply the first equation by $\frac{k}{2}$ and subtract all the rest to get $$\frac{k}{2}\cdot x_1\cdot n_1+\frac{k}{2}\cdot x_2\cdot n_2+\sum_{i=3}^k (-x_i)\cdot n_i=1 \implies \gcd(n_1,n_2,\dots,n_k)=1$$</p>
<p><strong>If $k$ is odd</strong>: Arrange $\{n_i\}$ such that $n_1\neq 1$ (this is possible; otherwise all $n_i=1$ and there's nothing to prove). We prove that there exist $\{x_i\}$ such that $$n_1x_1+n_2x_2 + n_3x_3=1$$$$n_4x_4+n_5x_5=1$$$$...$$$$n_{k-1}x_{k-1}+n_kx_k=1$$ The second through last equations exist for some $x$ by Bachet-Bezout, while the first can be achieved with $(x'_2,x'_3)$ such that $$n_2x'_2+n_3x'_3=1$$ and setting $(x_1,x_2,x_3)=(1,(-n_1+1)x'_2,(-n_1+1)x'_3)$. Hence, similarly to the even case, we multiply the first equation by $\frac{k-1}{2}$ then subtract the rest to get $$\frac{k-1}{2}\cdot x_1\cdot n_1+\frac{k-1}{2}\cdot x_2\cdot n_2+\frac{k-1}{2}\cdot x_3\cdot n_3+\sum_{i=4}^k (-x_i)\cdot n_i=1 \implies \gcd(n_1,n_2,\dots,n_k)=1$$</p>
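<p>The even case of this construction can be carried out explicitly; a sketch in Python (the moduli $[3,5,7,11]$ and the helper names are illustrative choices, not part of the answer):</p>

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def unit_combination_even(ns):
    """For an even-length list of pairwise coprime integers, build coefficients
    x_i with sum(n_i * x_i) == 1, following the even-case recipe above:
    pair the n_i, take a Bezout identity for each pair, multiply the first
    identity by k/2, and subtract the remaining k/2 - 1 identities."""
    k = len(ns)
    pairs = []
    for i in range(0, k, 2):
        g, x, y = egcd(ns[i], ns[i + 1])
        assert g == 1, "each pair must be coprime"
        pairs.append((x, y))
    m = k // 2
    coeffs = [0] * k
    coeffs[0], coeffs[1] = m * pairs[0][0], m * pairs[0][1]
    for i in range(1, m):
        coeffs[2 * i], coeffs[2 * i + 1] = -pairs[i][0], -pairs[i][1]
    return coeffs

ns = [3, 5, 7, 11]
xs = unit_combination_even(ns)
print(sum(n * x for n, x in zip(ns, xs)))  # 1
```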
|
3,368,402 | <p>I am utilizing set identities to prove that $(A-B)-C=(A-C)-(B-C)$.</p>
<p><span class="math-container">$\begin{array}{|l}(A−B)− C = \{ x | x \in ((x\in (A \cap \bar{B})) \cap \bar{C}\} \quad \text{Def. of Set Minus}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{B})) \wedge (x\in\bar{C})\} \quad \text{Def. of intersection}
\\ \quad \quad \quad \quad \quad =\{ x | (A\wedge\overline{C}\wedge\overline{B})\vee(\overline{C}\wedge\overline{B}\wedge C)\} \quad \text{Association Law}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{C})) \wedge ((x\in \bar{B}) \wedge (x\in\bar{C}))\} \quad \text{Idempotent Law}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap\bar{C})) \cap (x\in (\bar{B} \cap\bar{C})))\} \quad \text{Def. of union}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap \bar{C})) \cap \overline{(x\in (B\cup C)))} \} \quad \text{DeMorgan's Law}
\\
\quad \quad \quad \quad \quad =\{ x | x \in (A - C) - (B \cup C) \} \quad \text{Def. Set Minus}
\\
=(A-C)-(B-C)
\end{array}$</span></p>
<p>So it looks like I screwed up on the final step. Is there something that I am forgetting to do properly or where am I supposed to go from that final step? </p>
| J.G. | 56,861 | <p>Abbreviating and as <span class="math-container">$\land$</span> and not as <span class="math-container">$\lnot$</span>,</p>
<p><span class="math-container">$$x\in(A-B)-C\iff x\in A-B\land x\notin C \iff x\in A\land x\notin B\land x\notin C\\\iff x\in A\land x\notin C\land\lnot(x\in B\land x\notin C)\iff x\in A-C\land x\notin B-C\\\iff x\in (A-C)-(B-C).$$</span>The third <span class="math-container">$\iff$</span> uses <span class="math-container">$$p\land\lnot q\land\lnot r\iff(p\land\lnot r)\land\lnot (q\land\lnot r)$$</span>(you can verify this is a tautology), while the rest use the definition of <span class="math-container">$-$</span>.</p>
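<p>The tautology used in the third step can be verified mechanically over all eight truth assignments; a small sketch:</p>

```python
from itertools import product

def lhs(p, q, r):
    # p and (not q) and (not r)
    return p and (not q) and (not r)

def rhs(p, q, r):
    # (p and not r) and not (q and not r)
    return (p and not r) and not (q and not r)

# exhaustive check over all 8 truth assignments
print(all(lhs(p, q, r) == rhs(p, q, r)
          for p, q, r in product([False, True], repeat=3)))  # True
```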
|
737,692 | <p>I'm working on this question:</p>
<blockquote>
<p>Rewrite the following summation using sigma notation and then compute it
using the technique of telescoping summation.
$$\frac{1}{2*5}+\frac{1}{3*6}+\frac{1}{4*7}+...+\frac{1}{(n-2)(n+1)}+\frac{1}{(n-1)(n+2)} $$</p>
</blockquote>
<p>My work:
I replaced the $n's$ for $i's$ and added the Sigma to it $$\sum_{i=1}^n \frac{1}{(i-2)(i+1)}+\frac{1}{(i-1)(i+2)} $$
would it be wrong to replace every $i$ with $\frac{n(n+1)}{2}$? And then simplify the resulting equation. </p>
| izœc | 83,639 | <p><strong>HINT:</strong> Your sigma expression is not correct (you can see this by plugging in $i=1$: the second term in the expression is undefined). Consider that in the sum
$$
\frac{1}{2*5}+\frac{1}{3*6}+\frac{1}{4*7}+...+\frac{1}{(n-2)(n+1)}+\frac{1}{(n-1)(n+2)}
$$
every term is of the form $\frac{1}{(n-1)(n+2)}$ for some $n$. This should let you figure out what the initial index value should be in the sum [Hint: Compare $\frac{1}{2 \cdot 5} = \frac{1}{(i-1)(i+2)}$].</p>
<p>Since I noticed in the comments below that the poster is still unsure, you should expect to arrive at the sum </p>
<p>$$
\frac{1}{2*5} + \cdots + \frac{1}{(n-1)(n+2)}
=\sum_{i = 3} ^n \frac{1}{(i-1)(i+2)}
$$</p>
<p>From this, similar to the method in @lab you can write this in a clearly telescoping form, using that $\frac{1}{i-1} - \frac{1}{i+2} = \frac{i+2 - (i-1)}{(i-1)(i+2)} = 3 \frac{1}{(i-1)(i+2)}$. Hence, </p>
<p>$$
\sum_{i = 3} ^n \frac{1}{(i-1)(i+2)} = \sum_{i = 3} ^n \frac{1}{3} \left( \frac{1}{i-1} - \frac{1}{i+2} \right) = \frac{1}{3} \sum_{i = 3} ^n \left( \frac{1}{i-1} - \frac{1}{i+2} \right).
$$
Perhaps you can take it from here?</p>
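<p>The telescoping cancellation can be verified exactly with rational arithmetic; a sketch (the closed form keeps only the terms that survive the cancellation):</p>

```python
from fractions import Fraction

def direct(n):
    # term-by-term sum of 1/((i-1)(i+2)) for i = 3..n
    return sum(Fraction(1, (i - 1) * (i + 2)) for i in range(3, n + 1))

def telescoped(n):
    # after cancellation only the first three and last three terms survive
    keep = Fraction(1, 2) + Fraction(1, 3) + Fraction(1, 4)
    drop = Fraction(1, n) + Fraction(1, n + 1) + Fraction(1, n + 2)
    return (keep - drop) / 3

print(all(direct(n) == telescoped(n) for n in range(3, 30)))  # True
```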
|
898,495 | <p>A standard pack of 52 cards with 4 suits (each having 13 denominations) is well shuffled and dealt out to 4 players (N, S, E and W).</p>
<p>They each receive 13 cards.</p>
<p>If N and S have exactly 10 cards of a specified suit between them, what is the probability that the 3 remaining cards of the suit are in one player's hand (either E or W)? Can you please help me understand how to solve this conditional probability question?</p>
| user2566092 | 87,313 | <p>When you condition, you have 26 cards left and 3 of them are of the particular suit. There are $26 \choose 13$ ways of assigning these remaining 26 cards among E and W (because once you assign 13 cards to E, the remaining 13 cards automatically go to W). You get that one player has all 3 cards if either E has 3 or E has 0. The number of ways E could have all 3 is $23 \choose 10$. The number of ways E could have 0 is $23 \choose 13$, which is also $23 \choose 10$. So $2 {23 \choose 10}/{26 \choose 13}$ is the answer.</p>
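<p>Carrying the arithmetic through exactly (a sketch with Python's rational type), the formula evaluates to $2\binom{23}{10}/\binom{26}{13} = 11/50$:</p>

```python
from fractions import Fraction
from math import comb

# either E holds all 3 remaining cards of the suit, or E holds none of them
p = Fraction(2 * comb(23, 10), comb(26, 13))
print(p)  # 11/50, i.e. 0.22
```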
|
372,548 | <p><span class="math-container">$f:\mathbb R\to\mathbb R$</span> is a convex continuous function. We have a finite or a countable set of triples: <span class="math-container">$\{(x_n,f(x_n),D_n)\}_{n\in N}$</span>, where <span class="math-container">$D_n$</span> is the slope of a tangent line <span class="math-container">$L_n$</span> at <span class="math-container">$x_n$</span> (if at a point <span class="math-container">$f$</span> is not differentiable, then multiple lines can be tangents; <span class="math-container">$L_n$</span> is just one of those lines).</p>
<p>Assuming that, for any <span class="math-container">$n,m,k$</span>, the intersection of <span class="math-container">$L_n$</span> and <span class="math-container">$L_m$</span> cannot be the point <span class="math-container">$(x_k, f(x_k))$</span>, then we want to prove that there exists a smooth function <span class="math-container">$g$</span> such that <span class="math-container">$g(x_n)=f(x_n)$</span> and <span class="math-container">$g'(x_n)=D_n$</span> for any <span class="math-container">$n$</span>.</p>
<hr />
<p>The original problem that I am trying to solve involves multi-dimensional manifolds, but I think it is easy to generalize the 2-dimensional case.</p>
<p>By the mollification theorem, a smooth function approximating <span class="math-container">$f$</span> must exist, but can it contain a set of points that corresponds precisely to the points on the graph of <span class="math-container">$f$</span>?</p>
| Rajesh D | 14,414 | <p>At time zero, assume the bandwidth of the solution is <span class="math-container">$B$</span>. At the next time instant, because of the term <span class="math-container">$u_xu$</span> (the bandwidth of <span class="math-container">$u_x$</span> also being <span class="math-container">$B$</span> at time <span class="math-container">$0$</span>), the bandwidth of the solution <span class="math-container">$u$</span> becomes <span class="math-container">$2B$</span>, since multiplication in the spatial domain is convolution in the frequency domain. This goes on. Although this is a very crude argument, it suggests that the solution may not be band-limited.</p>
|
1,218,238 | <p>Describe explicitly a subgroup $H$ of order 8 of the permutation group $S_5$.</p>
<p>How could I find such a subgroup? I don't know where to start. Should I start with some transposition $(i,j)$ and use such elements to generate a subgroup?</p>
| Ross Millikan | 1,827 | <p>Your answer is correct unless $n=0$. In that case itself and the zero subspace are the same.</p>
|
1,209,934 | <p>So I am given two points $A=(-.5,2.3,-7.3)$ and $B=(-2,17.1,-0.3)$ and then using $AB = OB - OA$ to give me $(-1.5,14.8,7)$. The plane is $$x+23y+13z=500$$ From there I computed $r\cdot n$ where $r=(-1.5,14.8,7)$ and $n=(1,23,13)$. From here I do not know how to check if the vector is perpendicular to the plane.</p>
| ThunderGod763 | 781,395 | <p>We have the two points <span class="math-container">$A(-0.5,2.3,-7.3)$</span> and <span class="math-container">$B(-2,17.1,-0.3)$</span>, as well as the general equation of the plane <span class="math-container">$x+23y+13z-500=0$</span>. The vector from <span class="math-container">$A$</span> to <span class="math-container">$B$</span> can easily be found.
<span class="math-container">$$\vec{\mathbf{r}}=\langle(-2+0.5),(17.1-2.3),(-0.3+7.3)\rangle=\langle-1.5,14.8,7\rangle$$</span>
The normal vector to the plane is <span class="math-container">$\vec{\mathbf{n}}=\langle1,23,13\rangle$</span>. These two vectors are parallel if their cross product is equal to <span class="math-container">$\vec{\mathbf{0}}$</span> [<a href="http://mathworld.wolfram.com/ParallelVectors.html" rel="nofollow noreferrer">1</a>]. If <span class="math-container">$\vec{\mathbf{r}}$</span> and <span class="math-container">$\vec{\mathbf{n}}$</span> are parallel vectors, then they are both perpendicular to the plane in question.
<span class="math-container">$$\vec{\mathbf{r}}\times\vec{\mathbf{n}}=\langle(14.8)(13)-(7)(23),(7)(1)-(-1.5)(13),(-1.5)(23)-(14.8)(1)\rangle$$</span>
This comes from the fact that <span class="math-container">$\vec{\mathbf{u}}\times\vec{\mathbf{w}}=\langle u_2w_3-u_3w_2,u_3w_1-u_1w_3,u_1w_2-u_2w_1 \rangle$</span> [<a href="http://mathworld.wolfram.com/CrossProduct.html" rel="nofollow noreferrer">2</a>].
<span class="math-container">$$\vec{\mathbf{r}}\times\vec{\mathbf{n}}=\langle31.4,26.5,-49.3\rangle\ne\langle0,0,0\rangle$$</span>
Let us see if the vector <span class="math-container">$\vec{\mathbf{r}}$</span> happens to be parallel to the plane in question. If it is, then the dot product of <span class="math-container">$\vec{\mathbf{r}}$</span> and <span class="math-container">$\vec{\mathbf{n}}$</span> will be <span class="math-container">$0$</span> as the two vectors will be orthogonal.
<span class="math-container">$$\vec{\mathbf{r}}\cdot\vec{\mathbf{n}}=(-1.5)(1)+(14.8)(23)+(7)(13)=-1.5+340.4+91=429.9\ne0$$</span>
The vector from <span class="math-container">$A$</span> to <span class="math-container">$B$</span> is not perpendicular to the plane in question, nor is it parallel to it.</p>
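<p>The cross and dot products above can be reproduced with numpy (a verification sketch):</p>

```python
import numpy as np

r = np.array([-1.5, 14.8, 7.0])  # vector from A to B
n = np.array([1.0, 23.0, 13.0])  # normal of the plane x + 23y + 13z = 500

print(np.cross(r, n))  # [ 31.4  26.5 -49.3]  nonzero -> r is not parallel to n
print(np.dot(r, n))    # 429.9                nonzero -> r is not parallel to the plane
```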
|
1,097,579 | <p>How to solve $$\frac{dx}{2p}=\frac{dy}{2q}=\frac{du}{2(p^2+q^2)}=\frac{dp}{2up}=\frac{dq}{2uq}=dt$$</p>
<p>as functions $$x=x(t), y=y(t), u=u(t), p=p(t), q=q(t)$$</p>
<p>My method is to use the last three equalities to deduce $$\frac{d^2u}{dt^2}+u\frac{du}{dt}=0$$
But this nonlinearity troubles me...</p>
| math110 | 58,742 | <p>Hint: your equation is the following
$$y''+yy'=0$$
let
$$y'=p\Longrightarrow y''=\dfrac{dp}{dx}=\dfrac{dp}{dy}\cdot\dfrac{dy}{dx}=p\dfrac{dp}{dy}$$
so
$$p\dfrac{dp}{dy}+yp=0$$
so
$$p(\dfrac{dp}{dy}+y)=0\Longrightarrow p=0,\text{or},\dfrac{dp}{dy}=-y$$
if
$p=0\Longrightarrow y=C$</p>
<p>if $$\dfrac{dp}{dy}=-y\Longrightarrow p=-\dfrac{y^2}{2}+C$$
then
$$\dfrac{dy}{dt}=-\dfrac{y^2}{2}+C$$
which is now separable and easy to integrate.</p>
|
1,097,579 | <p>How to solve $$\frac{dx}{2p}=\frac{dy}{2q}=\frac{du}{2(p^2+q^2)}=\frac{dp}{2up}=\frac{dq}{2uq}=dt$$</p>
<p>as functions $$x=x(t), y=y(t), u=u(t), p=p(t), q=q(t)$$</p>
<p>My method is to use the last three equalities to deduce $$\frac{d^2u}{dt^2}+u\frac{du}{dt}=0$$
But this nonlinearity troubles me...</p>
| Sidharth Ghoshal | 58,294 | <p>Based on your last line:</p>
<p>Consider $F = u^2$</p>
<p>$$ \frac{dF}{dt} = 2 u \frac{du}{dt}, \qquad\text{so}\qquad u\,\frac{du}{dt} = \frac12 \frac{dF}{dt} $$</p>
<p>Thus suppose we re-arrange</p>
<p>$$ \frac{d^2u}{dt^2} + u \frac{du}{dt} = 0 $$</p>
<p>Into</p>
<p>$$ \frac{d^2u}{dt^2} = - u \frac{du}{dt} $$</p>
<p>Then we can re-write it as</p>
<p>$$ \frac{d}{dt} \left[ \frac{du}{dt} \right] = \frac{d}{dt}\left[ - \frac{1}{2}u^2\right] $$</p>
<p>Thus we have</p>
<p>$$ \frac{du}{dt} = C_1-\frac{1}{2}u^2 $$</p>
<p>From here we will use the standard method for solving separable ODEs</p>
<p>$$ \frac{1}{C_1 - \frac{1}{2}u^2} du = 1 dt$$</p>
<p>Since the choice of C is arbitrary this can be re-written as</p>
<p>$$2 \frac{1}{C_1 - u^2} du = 1 dt$$</p>
<p>We can integrate both sides:</p>
<p>$$2 \int \frac{1}{C_1 - u^2} du = t + C_2$$</p>
<p>The left hand side can be attacked with Partial Fractions: We decompose the integrand into:</p>
<p>$$ \frac{1}{C_1 - u^2} = \frac{1}{(\sqrt{C_1} - u)(\sqrt{C_1} + u)} = \frac{A}{\sqrt{C_1} - u} + \frac{B}{\sqrt{C_1} + u}$$</p>
<p>We find the constants A,B</p>
<p>$$ B(\sqrt{C_1} - u) + A(\sqrt{C_1} + u) = 1 $$</p>
<p>Which (based on grouping like terms of $C_1$ and $u$ tells us</p>
<p>$$ \begin{pmatrix} Au - Bu = 0 \\ \sqrt{C_1}(A + B) = 1 \end{pmatrix} \rightarrow \begin{pmatrix} A = B \\ (A + B) = \frac{1}{\sqrt{C_1}} \end{pmatrix} $$</p>
<p>Giving us</p>
<p>$$ A = B = \frac{1}{2 \sqrt{C_1}} $$</p>
<p>So our integral is now</p>
<p>$$2 \int \left[ \frac{\frac{1}{2 \sqrt{C_1}}}{\sqrt{C_1} -u} + \frac{\frac{1}{2 \sqrt{C_1}}}{\sqrt{C_1} + u} \right] du $$</p>
<p>We factor the top parts giving us</p>
<p>$$ \frac{1}{\sqrt{C_1}} \int \left[ \frac{1}{\sqrt{C_1} -u} + \frac{1}{\sqrt{C_1} + u} \right] du $$</p>
<p>Which integrates to</p>
<p>$$ \frac{1}{\sqrt{C_1}} \left( \ln(\sqrt{C_1} + u) - \ln(\sqrt{C_1} - u) \right) $$</p>
<p>So we have</p>
<p>$$2 \int \frac{1}{C_1 - u^2} du = t + C_2$$</p>
<p>Gives rise to</p>
<p>$$ \frac{1}{\sqrt{C_1}} \left( \ln(\sqrt{C_1} + u) - \ln(\sqrt{C_1} - u) \right) = t + C_2$$</p>
<p>Which resolves to</p>
<p>$$ \ln \left( \frac{\sqrt{C_1} + u}{\sqrt{C_1} - u} \right) = \sqrt{C_1}\,t + C_2 $$</p>
<p>We exponentiate both sides (absorbing constants into $C_2$) to find</p>
<p>$$ \left( \frac{\sqrt{C_1} + u}{\sqrt{C_1} - u} \right) = C_2 e^{\sqrt{C_1}t} $$</p>
<p>And now solve for $u$ as </p>
<p>$$ \sqrt{C_1} + u = (C_2 e^{\sqrt{C_1}t})(\sqrt{C_1} - u) \rightarrow $$</p>
<p>$$\left(1 + C_2 e^{\sqrt{C_1}t} \right)u= (C_2 e^{\sqrt{C_1}t}-1)\sqrt{C_1} $$</p>
<p>$$ u = \frac{(C_2 e^{\sqrt{C_1}t}-1)\sqrt{C_1}}{1 + C_2 e^{\sqrt{C_1}t}} $$</p>
<h2>Alternate Hyperbolic Approach</h2>
<p>$$ \int \frac{2}{C_1 - u^2}\, du = \frac{2}{\sqrt{C_1}} \operatorname{arctanh} \left( \frac{u}{\sqrt{C_1}} \right) + C $$</p>
<p>So our solution then becomes</p>
<p>$$\frac{2}{\sqrt{C_1}} \operatorname{arctanh} \left( \frac{u}{\sqrt{C_1}} \right) = t + C_2 $$</p>
<p>$$ \operatorname{arctanh} \left( \frac{u}{\sqrt{C_1}} \right) = \frac{\sqrt{C_1}}{2}\,t + C_2$$</p>
<p>$$ u = \sqrt{C_1}\,\tanh\left( \frac{\sqrt{C_1}}{2}\,t + C_2\right) $$</p>
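<p>The hyperbolic closed form can be checked symbolically; a sketch with sympy (the constant names are arbitrary):</p>

```python
import sympy as sp

t = sp.symbols('t')
C1, C2 = sp.symbols('C1 C2', positive=True)

# candidate solution u = sqrt(C1) * tanh(sqrt(C1)/2 * t + C2)
u = sp.sqrt(C1) * sp.tanh(sp.sqrt(C1) / 2 * t + C2)

# plug into the ODE u'' + u*u' = 0 and simplify the residual
residual = sp.diff(u, t, 2) + u * sp.diff(u, t)
print(sp.simplify(residual))  # 0
```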
|
1,443,680 | <p>In Quantum Mechanics one often deals with wavefunctions of particles. In that case, it is natural to consider as the space of states the space $L^2(\mathbb{R}^3)$. On the other hand, on the book I'm reading, there's a construction which it's quite elegant and general, however it is not rigorous. For those interested in seeing the book, it's "Quantum Mechanics" by Cohen-Tannoudji.</p>
<p>The book proceeds as follows: the first postulate of Quantum Mechanics states that for every quantum system there is one Hilbert space $\mathcal{H}$ whose elements describe the possible states of the system. The idea then is that $\mathcal{H}$ doesn't necessarily is a space of functions.</p>
<p>Indeed, Cohen defines (or doesn't define) $\mathcal{H}$ as the space of kets $|\psi\rangle\in \mathcal{H}$, being the kets just vectors encoding the states of the system.</p>
<p>The second postulate states that for each physically observable quantity there is associated one hermitian operator $A$ such that the only possible values to be measured are the eigenvalues of $A$ and such that</p>
<ol>
<li><p>If $A$ has a discrete spectrum with eigenvectors $\{|\psi_n\rangle : n \in \mathbb{N}\}$ then the probability of measuring the eigenvalue $a_n$ on the state $|\psi\rangle$ is $|\langle \psi_n | \psi\rangle|^2$, considering that $|\psi\rangle$ is normalized.</p></li>
<li><p>If $A$ has a continuous spectrum with eigenvectors $\{|\psi_{\lambda}\rangle : \lambda \in \Lambda\}$ then the probability density on state $|\psi\rangle$ for the possible eigenvalues is $\lambda \mapsto |\langle \psi_\lambda | \psi\rangle|^2$</p></li>
</ol>
<p>If, for example, the position operator $X$ for a particle in one dimension exists, and if its eigenvectors are $|x\rangle$ with eigenvalues $x$, for each $x\in \mathbb{R}$, then the probability density of position is $x \mapsto |\langle x |\psi\rangle|^2$, where $\langle x|\psi\rangle$ is a function $\mathbb{R}\to \mathbb{C}$, and we recover the wavefunction.</p>
<p>This formulation, though, seems to be more general. In that case, the wavefunction is just the information about one possible kind of measurement that we can obtain from the postulates. There is nothing special about it.</p>
<p>Now, although quite elegant and simple, this is not even a little rigorous. For example: the position operator hasn't been defined! It is just "the operator associated to position with continuous spectrum", but this doesn't define the operator. In the book, it is defined on the basis $\{|x\rangle\}$, but this set is defined in terms of it, so the definition is circular.</p>
<p>Another problem is that usually we are dealing with unbounded operators which are not defined on the whole of $\mathcal{H}$. And an even greater problem is that $\mathcal{H}$ was never defined!</p>
<p>I've been looking forward to find out how to make this rigorous, but couldn't find anything useful. Many people simply say that the right way is to consider always $L^2(\mathbb{R}^3)$, so that all of this talk is nonsense. But I disagree, I find it quite natural to consider this generalized version.</p>
<p>The only thing I've found was the idea of rigged Hilbert spaces, known also as Gel'fand triple. I've found not much material about it, but anyway, I didn't understand how it can be used to make this rigorous.</p>
<p>In that case, how does one make this idea of space of states, or space of kets, fully rigorous, overcoming the problems I found out, and possibly any others that may exist? Is it through the Gel'fand triple? If so, how is it done?</p>
| user91126 | 91,126 | <p><strong>First remark</strong>. QM postulates state that the Hilbert space is <em>separable</em>. Recall that the <strong>Riesz-Fischer</strong> theorem ensures that infinite-dimensional separable Hilbert spaces are all isometrically isomorphic, so it does not really matter which one you choose until you are speaking about the general theory. In concrete realizations, it is obviously useful to pick the Hilbert space that best reflects the properties of the system. </p>
<p><strong>Second remark</strong>. QM postulates don't <em>and</em> can't say how operators are made in specific situations. This is an experimental fact. They only say that these must be linear operators (generally unbounded) on $\mathcal H$, self-adjoint (not simply Hermitean!) if representing observables (i.e., quantities that can be actually measured on the system). It is a consequence of the noncommutative structure of the observables $C^*$-algebra that there exist couples of <em>incompatible</em> observables. (This leads to the Heisenberg principle.) Who says how operators are made? This is a prerogative of the <em>quantization procedure</em>. Such a procedure establishes a correspondence between classical and quantum observables, making precise the intuitive idea by Dirac. In a precise sense (<strong>Groenewold theorem</strong>), there <em>does not</em> exist a "universal" quantization procedure (again, observables are experimental). One requires that $X$ and $P$ must be implemented one as a multiplication operator and the other as a differential operator (which one is a matter of taste, leading to the so-called <em>x</em>-representation and <em>p</em>-representation), continuous and essentially self-adjoint when considered on rapidly decreasing functions, self-adjoint (not both bounded) when considered on a maximal domain of $L^2$, satisfying canonical commutation rules. [For quantities that have no classical counterpart, such as spin, the actual form is induced by extrapolating from experimental data algebraic properties such as commutation rules, spectra and so on.]</p>
<p><strong>Third remark</strong>. Wave functions are a very peculiar kind of state; $\mathcal H$ contains many other elements. Precisely, let $\mathcal A$ be the $C^*$-algebra of the physical system. <strong>Gleason's theorem</strong> establishes a 1-1 correspondence between trace-class operators in $\mathcal A$ and the <em>rays</em> of a projective Hilbert space $\mathcal H$. The so-called wave functions are those associated to the projectors of the form $(\psi, \, ) \psi$ (round brackets denote the scalar product in $\mathcal H$, $\psi \in \mathcal H$). Physicists refer to $\psi$ itself as the wave function, and this is misleading and, strictly speaking, incorrect. </p>
<p>The Gel'fand triple and the Gel'fand-Naimark-Segal (GNS) construction constitute ways of automatically obtaining the correct Hilbert space for the system. However, this is not something you can hope to find in a QM textbook for physicists, since the mathematical apparatus of QM is so robust that one can usually forget the technical details, "canonically" realize the Hilbert space and the observables, and perform actual calculations, which are the only important things in physics.</p>
<p>If you are interested in the last part, you can see </p>
<p>[1] Bogoliubov (et al.), <em>Axiomatic quantum field theory</em></p>
<p>[2] Landsman, <em>Mathematical topics between classical and quantum mechanics</em></p>
<p>[3] Dixmier, <em>$C^*$-algebras</em> (without applications to QM)</p>
|
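As a concrete instance of the second remark, here is the standard *x*-representation computation showing that a multiplication operator and a differential operator satisfy the canonical commutation rule on rapidly decreasing functions. This is a generic textbook sketch, not something specific to the answer above:

```latex
% x-representation on the Schwartz space S(R):
%   (X psi)(x) = x psi(x),   (P psi)(x) = -i hbar psi'(x)
\begin{aligned}
(XP\,\psi)(x) &= -i\hbar\, x\,\psi'(x),\\
(PX\,\psi)(x) &= -i\hbar\,\frac{d}{dx}\bigl(x\,\psi(x)\bigr)
               = -i\hbar\,\psi(x) - i\hbar\, x\,\psi'(x),\\
([X,P]\,\psi)(x) &= (XP - PX)\,\psi(x) = i\hbar\,\psi(x),
\end{aligned}
\qquad\text{so } [X,P] = i\hbar\,\mathrm{id}.
```

Both operators are essentially self-adjoint on $\mathcal S(\mathbb R)$, but neither is bounded, consistent with the remark that the commutation rules forbid both being bounded.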
1,196,317 | <p><a href="https://math.stackexchange.com/questions/1196261/let-g-be-a-group-where-ab3-a3b3-and-ab5-a5b5-prove-that-g-is/1196295#1196295">Let $G$ be a group, where $(ab)^3=a^3b^3$ and $(ab)^5=a^5b^5$. How to prove that $G$ is an abelian group?</a></p>
<p>P.S. Why can we not just cancel $ab$ out of the middle of these expressions? Why can we only cancel "on the left" and "on the right"? Could somebody explain that to me (if possible, could you refer to definitions/theorems when doing that)?</p>
<p>We can multiply the expressions by $a^{-1}$ and $b^{-1}$, as we have a group and this is one of the properties of a group. Am I right?</p>
| Tim Raczkowski | 192,581 | <p>When we have $xyz=x'yz'$ and want to cancel out the middle factor, we can only do that if we know that $y$ commutes with $x$ and $x'$, or with $z$ and $z'$. From being so accustomed to working with commutativity, we lose sight of this fact. In reality, we can only cancel from the left or right, and the only way we can do more is if we can move a factor to the left or right.</p>
<p>As for your problem, write $(ab)^3=a(ba)^2b$ and $(ab)^5=a(ba)^4b$. Cancelling $a$ on the left and $b$ on the right in the two hypotheses gives
$$(ba)^2=a^2b^2 \quad\text{and}\quad (ba)^4=a^4b^4.$$</p>
<p>But, $$(ba)^4=\bigl((ba)^2\bigr)^2=(a^2b^2)^2=a^2b^2a^2b^2,$$ so $a^4b^4=a^2b^2a^2b^2$, and cancelling $a^2$ on the left and $b^2$ on the right gives $b^2a^2=a^2b^2$.</p>
<p>Hence, $$baba=(ba)^2=a^2b^2=b^2a^2.$$
Now, cancel $b$ on the left and $a$ on the right to get $ab=ba$.</p>
|
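The theorem behind this question (the identities $(ab)^3=a^3b^3$ and $(ab)^5=a^5b^5$ force commutativity) can be sanity-checked numerically: in any nonabelian group, at least one of the two identities must fail for some pair. A small pure-Python sketch on $S_3$; `compose` and `power` are hypothetical helpers written for this illustration, not library functions:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, n):
    r = tuple(range(len(p)))  # identity permutation
    for _ in range(n):
        r = compose(r, p)
    return r

S3 = list(permutations(range(3)))  # the nonabelian group S_3, as tuples

# If S_3 satisfied both identities everywhere, it would have to be abelian;
# so some pair must violate at least one of them.
violations = [(a, b) for a in S3 for b in S3
              if power(compose(a, b), 3) != compose(power(a, 3), power(b, 3))
              or power(compose(a, b), 5) != compose(power(a, 5), power(b, 5))]
print(len(violations) > 0)  # True
```

For example $a=(0\;1)$, $b=(0\;2)$ already fails the cube identity, since $ab$ is a 3-cycle while $a^3b^3=ab$ is not the identity.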
499,171 | <p>Let $\{x_n\}$ be "any" sequence containing all rationals. I have to prove that every real number is the limit of some subsequence. I know that the rationals are dense in the reals. But doesn't the order of the rationals in the sequence create a problem here? How do I pick rationals from this sequence? </p>
| Old John | 32,441 | <p>Suppose that $A$ is any real number, and suppose that $x_{n_k}$ is the subsequence (that we will construct).</p>
<p>Suppose we have found the terms $x_{n_1}, x_{n_2}, \dots, x_{n_m}$; we show that it is possible to choose a term closer to $A$ than all previous terms of the subsequence:</p>

<p>Let $\epsilon$ be the smallest distance between $A$ and any of the (finitely many) terms of the subsequence chosen so far - and also less than $1/2^m$ - and note that there are infinitely many rationals in the interval $(A-\epsilon/2, A+\epsilon/2)$. This infinite collection of rationals exists somewhere amongst the sequence $\{x_n\}$, so there must be one of the $x_n$ with $n > n_m$ in the interval, and this gives the next term of the subsequence.</p>
<p>Since the choice of the first few terms is irrelevant for the limit, and it is clear that the subsequence converges to $A$, we are done.</p>
|
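The greedy construction in this answer can be simulated. The sketch below fixes one concrete (and entirely arbitrary) enumeration of the positive rationals by diagonals, then scans forward and keeps a term whenever it is strictly closer to the target than everything kept so far; density guarantees improvements keep appearing later in the sequence. This is an illustration of the argument, not a proof:

```python
from fractions import Fraction
from math import sqrt

def rational_enumeration(max_diagonal):
    # One concrete enumeration {x_n} of the positive rationals (repetitions
    # are harmless): diagonals p/q with p + q = 2, 3, 4, ...
    for s in range(2, max_diagonal + 1):
        for p in range(1, s):
            yield Fraction(p, s - p)

A = sqrt(2)  # the target real number

# Greedy subsequence: keep x_n whenever it improves on all terms kept so far.
subsequence, best = [], float("inf")
for x in rational_enumeration(800):
    d = abs(float(x) - A)
    if d < best:
        best = d
        subsequence.append(x)

print(subsequence[-1], best)  # the last kept term lies within `best` of sqrt(2)
```

With diagonals up to $800$ the final error is already below $10^{-3}$ (the continued-fraction convergent $239/169$ appears in this range).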
499,171 | <p>Let $\{x_n\}$ be "any" sequence containing all rationals. I have to prove that every real number is the limit of some subsequence. I know that rationals are dense in real. But, are not the order of the rationals in the sequence creating problem here ? How to pick rationals from this sequence. </p>
| user2345678 | 314,957 | <p>I think I've managed another way to show this result. </p>
<p>We will use two facts: $1)$ rational numbers are dense in the reals, and $2)$ if $(x_n)$ is a sequence in $\mathbb{R}$ such that every open ball centered at $a$ contains terms $x_n$ of the sequence for arbitrarily large $n$, then $a$ is the limit of some subsequence of $(x_n)$.</p>
<p>Let $\phi: \mathbb{N} \rightarrow \mathbb{Q} $ be any function that enumerates all the rational numbers. Then $\phi(\mathbb{N}) = \mathbb{Q}$ and, since the rationals are dense in the reals, it follows that $\overline{\phi(\mathbb{N})} = \mathbb{R}$. Choose any $a\in \mathbb{R}$. Therefore, for every $\epsilon>0 $ it is true that $B(a;\epsilon)\cap \phi(\mathbb{N}) \neq \emptyset$. Take any of those open balls, say $B(a;\epsilon_0).$ It remains to prove that $B(a;\epsilon_0)$ contains infinitely many terms of the sequence $(\phi_n)$. Supposing by contradiction that $B(a;\epsilon_0)$ contains only finitely many terms of $(\phi_n)$, say $\phi_{n_1},\dots ,\phi_{n_m}$, take $r = \mbox{min}\{d(a,\phi_{n_1}),\dots, d(a,\phi_{n_m})\}$ and set $B' = B(a;r)$. Then, by construction, $B'$ is a ball in $\mathbb{R}$ which contains no rational number, a contradiction with the fact that the rationals are dense in the reals. </p>
<p>By the arbitrary choice of the open ball, it follows that $B(a;\epsilon)$ contains infinitely many terms of $(\phi_n)$ for every $\epsilon>0$. Therefore, using $2)$, we have proven that $a$ is the limit of some subsequence of $\phi$. </p>
|
297,036 | <p>If $f'(x) = \sin{\dfrac{\pi e^x}{2}}$ and $f(0)= 1$, then what will be $f(2)$?</p>
<p>This is what I tried: finding the antiderivative of $f'(x)$ with u-substitution, </p>
<p>$$
\begin{align}
u &=\frac{\pi e^x}{2} \\
\frac{2}{\pi}du &=e^x dx
\end{align}
$$</p>
<p>I don't know what to do next.</p>
| userX | 61,346 | <p>Another option: you could use the Taylor expansion of $\sin(x)$, integrate term by term, and get $f(2)$ in the form of an infinite series. </p>
|
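The Taylor-series suggestion can be made concrete. With $u=\frac{\pi}{2}e^{x}$ and $\sin u=\sum_k (-1)^k u^{2k+1}/(2k+1)!$, integrating term by term over $[0,2]$ gives a series for $\int_0^2 f'(x)\,dx$, and then $f(2)=f(0)+\int_0^2 f'(x)\,dx = 1+\int_0^2 f'(x)\,dx$. A sketch comparing that series against composite Simpson quadrature as an independent check (the truncation point and tolerance are ad hoc choices):

```python
from math import sin, exp, pi, factorial

def fprime(x):
    return sin(pi * exp(x) / 2)

# Term-by-term integration of the Taylor series of sin:
#   int_0^2 ((pi/2) e^x)^(2k+1) dx = (pi/2)^(2k+1) (e^(2(2k+1)) - 1) / (2k+1)
series = sum(
    (-1) ** k / factorial(2 * k + 1)
    * (pi / 2) ** (2 * k + 1) * (exp(2 * (2 * k + 1)) - 1) / (2 * k + 1)
    for k in range(30)
)

# Composite Simpson's rule on [0, 2] as an independent numerical check.
n, a, b = 2000, 0.0, 2.0
h = (b - a) / n
simpson = (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * fprime(a + i * h)
    for i in range(n + 1)
)

f2 = 1 + simpson  # f(0) = 1 plus the accumulated change of f on [0, 2]
print(abs(series - simpson) < 1e-6, f2)
```

The two independent methods agree to many digits, which is good evidence both are computing $\int_0^2 \sin(\pi e^x/2)\,dx$ correctly; note the substitution route leads to $\int \sin(u)/u\,du$, which has no elementary antiderivative, so a series or numerical answer is the best one can do.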
1,256,460 | <p>I want to solve the following problem: </p>
<p>$$u_{xx}(x,y)+u_{yy}(x,y)=0, 0<x<\pi, y>0 \\ u(0,y)=u(\pi, y)=0, y>0 \\ u(x,0)=\sin x +\sin^3 x, 0<x<\pi$$ </p>
<p>$u$ bounded </p>
<p>I have done the following: </p>
<p>$$u(x,y)=X(x)Y(y)$$ </p>
<p>We get the following two problems: </p>
<p>$$X''(x)+\lambda X(x)=0 \ \ \ \ \ (1) \ X(0)=X(\pi)=0$$ </p>
<p>$$Y''(y)-\lambda Y(y)=0 \ \ \ \ \ (2)$$ </p>
<p>To solve the problem $(1)$ we do the following: </p>
<p>The characteristic polynomial is $\mu^2 +\lambda =0$. </p>
<ul>
<li><p>$\lambda <0$: $\mu=\pm \sqrt{-\lambda}$</p>
<p>$X(x)=c_1e^{\sqrt{-\lambda}x}+c_2e^{-\sqrt{-\lambda}x}$ </p>
<p>$X(0)=0 \Rightarrow c_1+c_2=0 \Rightarrow c_1=-c_2$ </p>
<p>$X(\pi)=0 \Rightarrow c_1e^{\sqrt{-\lambda}\pi}+c_2 e^{-\sqrt{-\lambda}\pi}=0 \Rightarrow c_2(-e^{\sqrt{-\lambda}\pi}+e^{-\sqrt{-\lambda}\pi})=0 \Rightarrow c_1=c_2=0$ </p></li>
<li><p>$\lambda=0$: </p>
<p>$X(x)=c_1 x+c_2$ </p>
<p>$X(0)=0 \Rightarrow c_2=0 \Rightarrow X(x)=c_1x$ </p>
<p>$X(\pi)=0 \Rightarrow c_1 \pi=0 \Rightarrow c_1=0$ </p></li>
<li><p>$\lambda >0$ : </p>
<p>$X(x)=c_1 \cos (\sqrt{\lambda} x)+c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(0)=0 \Rightarrow c_1=0 \Rightarrow X(x)=c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(\pi)=0 \Rightarrow \sin (\sqrt{\lambda}\pi)=0 \Rightarrow \sqrt{\lambda}\pi=k\pi \Rightarrow \lambda=k^2$ </p></li>
</ul>
<p>For the problem $(2)$ we have the following: </p>
<p>$Y(y)=c_1 e^{ky}+c_2 e^{-ky}$ </p>
<p>The general solution is the following: </p>
<p>$$u(x,y)=\sum_{k=0}^{\infty}a_k( e^{ky}+ e^{-ky}) \sin (kx) $$ </p>

<p>$$u(x,0)=\sin x+\sin^3 x=\sin x+\frac{3}{4}\sin x-\frac{1}{4}\sin (3x)=\frac{7}{4}\sin x-\frac{1}{4}\sin (3x) \\ \Rightarrow \frac{7}{4}\sin x-\frac{1}{4}\sin (3x)=\sum_{k=0}^{\infty}2a_k\sin (kx) \\ \Rightarrow 2a_1=\frac{7}{4} \Rightarrow a_1=\frac{7}{8}, \quad 2a_3=-\frac{1}{4} \Rightarrow a_3=-\frac{1}{8}, \quad a_k=0 \text{ for } k=2,4,5,6,7,8, \dots $$ </p>
<p>Is this correct?? </p>
| Community | -1 | <p>We have</p>
<p>$$\frac1t\int_0^t e^{-s/\tau}\,ds=\frac\tau t(1-e^{-t/\tau}).$$</p>
<p>Then with the initial temperature difference $\Delta_0=T_0-T_\infty$ and the difference at time $t$, $$\Delta_t=T_t-T_\infty=(T_0-T_\infty)e^{-t/\tau}=\Delta_0e^{-t/\tau},$$ the average is given by</p>
<p>$$\overline\Delta=\Delta_0\frac\tau t(1-e^{-t/\tau})=\Delta_0\frac\tau t-\Delta_t.$$</p>
|
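The averaging identity in this answer is easy to verify numerically. A small sketch (the values of $\tau$ and $t$ are arbitrary) comparing a midpoint Riemann sum for $\frac1t\int_0^t e^{-s/\tau}\,ds$ with the closed form $\frac{\tau}{t}(1-e^{-t/\tau})$:

```python
from math import exp

tau, t = 3.0, 5.0  # arbitrary time constant and averaging window

# Midpoint Riemann sum for (1/t) * integral_0^t exp(-s/tau) ds
n = 100_000
h = t / n
riemann = sum(exp(-((i + 0.5) * h) / tau) for i in range(n)) * h / t

closed_form = (tau / t) * (1 - exp(-t / tau))
print(abs(riemann - closed_form) < 1e-8)  # True: the identity holds
```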
908,196 | <blockquote>
<p>Solve $x^2-1=2$</p>
</blockquote>
<p>I have no idea how to do this; can somebody please help me? I have tried working it out and I could never get the answer.</p>
| Mary Star | 80,708 | <p>$$x^2-1=2 \Rightarrow x^2=1+2 \Rightarrow x^2=3 \Rightarrow \sqrt{x^2}=\sqrt{3} \Rightarrow |x|=\sqrt{3} \Rightarrow x=\pm \sqrt{3}$$</p>
|
908,196 | <blockquote>
<p>Solve $x^2-1=2$</p>
</blockquote>
<p>I have no idea how to do this; can somebody please help me? I have tried working it out and I could never get the answer.</p>
| Eff | 112,061 | <p>I feel from the comments that you lack some understanding of equations in general, not just quadratic equations, since this problem should be quite simple. Let's try and fix that!</p>
<p>First off, what does it mean to <em>solve</em> the equation $x^2-1=2$? It means to find the value(s) that we can plug in for $x$ such that the equality holds. For instance, plugging $x=1$ into the expression on the left-hand side yields $x^2-1=1^2-1=0$, so this is <em>not</em> a solution, since it does not equal $2$ as we wished. Instead of just guessing various $x$-values, one often uses operations such as addition, multiplication, etc., on both sides of the equality to try to isolate $x$.</p>
<p>Let's try to isolate $x$ in the equation using this method.</p>
<p>$$x^2-1=2$$</p>
<p>We want $x$ to be alone on one side, so first off we want to get rid of the $-1$; we do this by adding $1$ to each side.</p>
<p>\begin{align}x^2-1+1&=2+1 \implies\\
x^2&=3
\end{align}</p>
<p>Next off we want to get rid of the square. We use the opposite operation, square root:</p>
<p>\begin{align}\sqrt{x^2}&=\sqrt{3} \implies\\
x&=\pm\sqrt{3}
\end{align}</p>
<p>Now $x$ is isolated and we are left with the solution. Either $x=\sqrt{3}$ or $x=-\sqrt{3}$.</p>
<p>Let's do a quick calculation of the other example.</p>
<p>\begin{align}8x^2-200&=0 &\text{ Add 200 to each side}\\
8x^2 &=200 &\text{ Divide by 8}\\
x^2 &= \frac{200}{8} &\text{ Simplify}\\
x^2 &= 25 &\text{ Square root}\\
x &= \pm\sqrt{25}&\text{ Simplify}\\
x &= \pm 5.
\end{align}
And that is the solution.</p>
|
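Both worked examples above can be confirmed with the same "plug in and see whether the equality holds" test described at the start of that answer; a quick sketch:

```python
# Check x = +/- sqrt(3) against x^2 - 1 = 2 (floating point, so use a tolerance)
for x in (3 ** 0.5, -(3 ** 0.5)):
    assert abs((x * x - 1) - 2) < 1e-12

# Check x = +/- 5 against 8x^2 - 200 = 0 (exact integer arithmetic)
for x in (5, -5):
    assert 8 * x * x - 200 == 0

print("both solution sets verified")
```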
908,196 | <blockquote>
<p>Solve $x^2-1=2$</p>
</blockquote>
<p>I have no idea how to do this; can somebody please help me? I have tried working it out and I could never get the answer.</p>
| Community | -1 | <p>$x^2-1=2$ Initial Problem<br>
$x^2=3$ Add 1 to both sides.<br>
$x= \sqrt{3} , -\sqrt{3}$ Square root both sides and you have your two solutions</p>
|
745,436 | <p>I'm reading this pdf <a href="http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf" rel="nofollow">http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf</a> I understand some of the expressions used in it, but I don't understand the part $(m,n) = 1$.</p>
<p>Is this a Cartesian coordinate or some sort of operation?</p>
| Yiyuan Lee | 104,919 | <p>It means that the <a href="http://en.wikipedia.org/wiki/Greatest_common_divisor" rel="nofollow">greatest common divisor</a> of $m$ and $n$, which is the largest integer dividing both $m$ and $n$, is equal to $1$.</p>
<p>In other words, they are <a href="http://en.wikipedia.org/wiki/Coprime_integers" rel="nofollow">coprime</a>, since $1$ is the largest integer dividing both of them.</p>
|
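In code, the notation $(m,n)$ corresponds directly to the standard gcd, e.g. Python's `math.gcd` (the sample numbers below are arbitrary):

```python
from math import gcd

# (m, n) = 1 means gcd(m, n) = 1, i.e. m and n are coprime.
print(gcd(8, 15))   # 1  -> (8, 15) = 1, so 8 and 15 are coprime
print(gcd(12, 18))  # 6  -> (12, 18) = 6, so 12 and 18 are not coprime
```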
745,436 | <p>I'm reading this pdf <a href="http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf" rel="nofollow">http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf</a> I understand some of the expressions used in it, but I don't understand the part $(m,n) = 1$.</p>
<p>Is this a Cartesian coordinate or some sort of operation?</p>
| user140943 | 140,943 | <p>$(m,n)$ is the gcd of m and n. </p>
|
39,597 | <p>There was a recent question on intuitions about sheaf cohomology, and I answered in part by suggesting the "genetic" approach (how did cohomology in general arise?). For historical material specific to sheaf cohomology, what Houzel writes in the Kashiwara-Schapira book <em>Sheaves on Manifolds</em> for sheaf theory 1945-1958 should be adequate.</p>
<p>The question really is about the earlier period 1935-1938. According to nLab, cohomology with local coefficients was proposed by Reidemeister in 1938 (<a href="http://ncatlab.org/nlab/show/history+of+cohomology+with+local+coefficients">http://ncatlab.org/nlab/show/history+of+cohomology+with+local+coefficients</a>). The other bookend comes from Massey's article in <em>History of Topology</em> edited by Ioan James, suggesting that from 1895 and the inception of homology, it took four decades for "dual homology groups" to get onto the serious agenda of topologists. It happens that 1935 was also the date of a big international topology conference in Stalin's Moscow, organised by Alexandrov. This might be taken as the moment at which cohomology was "up in the air".</p>
<p>Now de Rham's theorem is definitely somewhat earlier. Duality on manifolds is quite a bit earlier in a homology formulation. </p>
<p>It is apparently the case that <em>At the Moscow conference of 1935 both Kolmogorov and Alexander announced the definition of cohomology, which they had discovered independently of one another.</em> This is from <a href="http://www.math.purdue.edu/~gottlieb/Bibliography/53.pdf">http://www.math.purdue.edu/~gottlieb/Bibliography/53.pdf</a> at p. 11, which then mentions the roles of Čech and Whitney in the next couple of years. This is fine as a narrative, as far as it goes. I have a few questions, though:</p>
<p>1) Is the axiomatic idea of cocycle as late as Eilenberg in the early 1940s?</p>
<p>2) What was the role of obstruction theory, which produces explicit cocycles?</p>
<p>Further, Weil has his own story. Present at the Moscow conference and in the USSR for a month or so after, his interest in cohomology was directed towards the integration of de Rham's approach into the theory. He comments in the notes to his works that he pretty much rebuffed Eilenberg's ideas. Bourbaki was going to write on "combinatorial topology" but the idea stalled (I suppose this is related). So I'd also like to understand better the following:</p>
<p>3) Should we be accepting the topologists' history of cohomology, if it means restricting attention to the "algebraic" theory, or should there be more differential topology as well as sheaf theory in the picture?</p>
<p>As said, restriction to a short period looks like a good idea to get some better grip on this chunk of history.</p>
| Tim Perutz | 2,356 | <p>What strikes me about the first fifty years of homology theory (from Poincaré to Eilenberg-Steenrod's book) is that the development was as much about stripping away unnecessary complication as about increasing sophistication. A famous example is singular homology, which was found very late, by Eilenberg. The construction as we know it presumably seemed too naive to Lefschetz, who misguidedly devised a theory of oriented simplices, and inadequate to those who were interested in general (not locally path connected) metric spaces. </p>
<p>I want to suggest that this process of stripping away is relevant to the introduction of cohomology and its product. (Cf. Dieudonné's "History of algebraic and differential topology", pp.78-81). I won't directly answer the questions, but will suggest that one of the motivations for cohomology came from an application of Pontryagin duality which was rendered obsolete by the new theory.</p>
<p>Alexander wrote up his Moscow conference talk, with improvements suggested by Cech, in a 1936 Annals paper (vol. 37 no. 3) (<a href="http://jstor.org/stable/1968484?seq=1">JSTOR link</a>). In it, he proposes the cohomology ring ("connectivity ring") as a fundamental homological invariant of a space. In the introduction he hints at the line of thought that led him to the cohomology ring. The relation between cycles and differential forms is mentioned (without citation of de Rham), but what looks more surprising to modern eyes is the comment that the theory of cycles "has been very greatly perfected by Pontrjagin's cycles with real coefficients reduced modulo 1". </p>
<p>Pontryagin had recently developed his duality theory for locally compact abelian groups (<a href="http://www.jstor.org/stable/1968438">Annals, 1934</a>) in order to apply it to Alexander duality (again, <a href="http://www.jstor.org/stable/1968501">Annals, 1934</a>). If $K$ is a compact polyhedral complex in $\mathbb{R}^n$, there is a linking form which gives a pairing between $k$-cycles of $K$ and $(n-k-1)$-cycles of $\mathbb{R}^n-K$ and, in modern terms, induces an isomorphism of $H_k(K)$ with $H^{n-k-1}(\mathbb{R}^n-K)$. Alexander's formulation equated the Betti numbers over a field (mod 2, initially - Dieudonné p. 57) of $K$ and its complement, but it was understood that the full homology groups of $K$ and $\mathbb{R}^n-K$ need not be isomorphic. Pontryagin showed that if one takes a Pontryagin-dual pair of metric abelian groups, say $\mathbb{Z}$ and $\mathbb{T}$, so that each is the character group of the other, then $H_k(K;\mathbb{T})$ is Pontryagin-dual to $H_{n-k-1}(\mathbb{R}^n-K;\mathbb{Z})$ via the linking form.</p>
<p>From Alexander's introduction:</p>
<blockquote>
Now, if we use Pontrjagin's cycles, the $k$th connectivity [homology] group of a compact, metric space becomes a compact, metric group. Moreover, by a theorem of Pontrjagin, every such group may be identified with the character group of a countable, discrete group. This immediately suggests the advisability of regarding the discrete group, rather than its equivalent (though more complicated) metric character group as the $k$th invariant of a space.... One decided advantage of taking the discrete groups...as the fundamental connectivity groups of a space is that we can then take the product...of two elements of the same or different groups.
</blockquote>
<p>Guided by Pontryagin's generalisation of his own duality theorem, Alexander finds a simple construction that supersedes Pontryagin's as a basic invariant. (The universal coefficient theorem gives a modern perspective on why Pontryagin's choice of coefficient groups works. I must admit, his formulation of duality is very clean.)</p>
<p>Can anyone comment on Kolmogorov's route to cohomology?</p>
<p>ADDED. On obstruction theory: Charles Matthews's comments draw attention to a 1940 paper of Eilenberg. The MathSciNet review of that paper (by Hurewicz, whose homotopy groups, useful for obstruction theory, date from 1935-36) points me to its 1937 forerunner by Whitney, "The maps of an $n$-complex into an $n$-sphere" (Duke M.J. 3 (no.1), 51-55). This work, too, was presented at the Moscow conference in 1935. Though the topic is different, Whitney's introduction closely resembles Alexander's: </p>
<blockquote>
The classes of maps of an $n$-complex into an $n$-sphere were classified by H. Hopf in 1932.
Recently, Hurewicz [1935-6] has extended this theorem by replacing the sphere with more general spaces. Freudenthal [1935] and Steenrod have noted that the theorem and proof are simplified by using real numbers reduced mod 1 in place of integers as coefficients in the chains considered. We shall give here a statement of the theorem that seems most natural; the proof is quite simple.... The fundamental tool of the paper is the notion of "coboundary"; it has come into prominence in the last few years.
</blockquote>
|
3,479,953 | <p>Let <span class="math-container">$v=\{v_1,v_2,\dots,v_k\}$</span> be linearly independent</p>
<p><span class="math-container">$\mathbb{F} = \mathbb{R}$</span> or <span class="math-container">$\mathbb{F}=\mathbb{C}$</span></p>
<blockquote>
<p>Prove that <span class="math-container">$\{v_1 + v_2 , v_2+v_3, v_3+v_4,\dots,v_{k-1}+v_k,v_k+v_1\}$</span> is linearly independent if and only if <span class="math-container">$k$</span> is an odd number </p>
</blockquote>
<p>Let <span class="math-container">$A$</span> be the matrix of the set <span class="math-container">$S=\{v_1+v_2,\dots,v_k+v_1\}$</span> with respect to <span class="math-container">$v$</span>: </p>
<p><span class="math-container">$A= \begin{pmatrix}1&1&0&\cdots&0\\ 0&1&1&\cdots&0\\ \vdots& &\ddots&\ddots&\vdots\\ 0&0&\cdots&1&1\\ 1&0&\cdots&0&1\end{pmatrix}$</span></p>
<p>To prove this using the determinant, I get <span class="math-container">$|A|=1+(-1)^{k+1}$</span>,</p>
<p>so if <span class="math-container">$k$</span> is an odd number then <span class="math-container">$k+1$</span> is even and <span class="math-container">$|A|=2 \Rightarrow A$</span> is invertible <span class="math-container">$\Rightarrow$</span> <span class="math-container">$S$</span> is linearly independent.</p>
<p>But any idea how to prove this without using the determinant?</p>
<p>thanks</p>
| Alex | 48,061 | <p>Let <span class="math-container">$A$</span> be not the <span class="math-container">$n\times d$</span>-matrix, but the <span class="math-container">$n\times n$</span>-matrix <span class="math-container">$$A=[v_1 \ldots v_d\ \vec{0}\ldots\ \vec 0]^t.$$</span> Reduce <span class="math-container">$A$</span> to <a href="https://en.wikipedia.org/wiki/Smith_normal_form" rel="nofollow noreferrer">Smith Normal Form</a>. This takes <span class="math-container">$O(n\cdot d)$</span> row operations. The number of non-zero rows of the resulting matrix equals the number of these vectors that were linearly independent.</p>
|
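The determinant claim in the question, $|A|=1+(-1)^{k+1}$, is easy to confirm with exact rational arithmetic. For the determinant-free direction: when $k$ is even, the alternating sum $\sum_{i=1}^{k}(-1)^{i+1}(v_i+v_{i+1})$ (indices mod $k$) telescopes to $0$, an explicit linear dependence; when $k$ is odd, the same alternating sum equals $2v_1$, so each $v_i$ is recoverable from the new set, which forces independence. A sketch; the `pair_sum_matrix` and `det` helpers are written for this illustration, not taken from a library:

```python
from fractions import Fraction

def pair_sum_matrix(k):
    # Row i is the coordinate vector of v_i + v_{i+1} (indices mod k)
    A = [[Fraction(0)] * k for _ in range(k)]
    for i in range(k):
        A[i][i] = Fraction(1)
        A[i][(i + 1) % k] = Fraction(1)
    return A

def det(M):
    # Exact Gaussian elimination over the rationals
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

# The question's formula: |A| = 1 + (-1)^(k+1), i.e. 2 for odd k, 0 for even k
for k in range(2, 10):
    assert det(pair_sum_matrix(k)) == 1 + (-1) ** (k + 1)

# Determinant-free dependence for even k: the alternating sum of rows is 0
k = 6
rows = pair_sum_matrix(k)
alt = [sum((-1) ** i * rows[i][j] for i in range(k)) for j in range(k)]
assert all(c == 0 for c in alt)
print("determinant formula and even-k dependence confirmed")
```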