| qid | question | author | author_id | answer |
|---|---|---|---|---|
177,760 | <p>Can you tell me what the difference is between <strong>empirical distribution</strong> and <strong>classical probability</strong>?
My teacher has told me that when we take the limit, the empirical frequency approaches a constant value </p>
<p>$$P(A)=\lim_{N\rightarrow\infty}f(A)=\lim_{N\rightarrow \infty}\frac{N(A)}{N}= \mathrm{constant}$$</p>
<p>where $f(A)$ is the frequency ratio, $N(A)$ is the number of times event $A$ is found to occur, and $N$ is the number of times the random experiment is repeated</p>
<p>But classical probability will give</p>
<p>$$P(A)=\lim_{n\rightarrow \infty }\frac{m}{n}= 0$$</p>
<p>but what I know of limits is like this</p>
<p>$$\lim_{x\rightarrow\infty}\frac{1}{x}=0$$</p>
<p>Then how come the empirical distribution gives a constant instead of zero?</p>
<p>And lastly, can you explain what the <strong>axiomatic definition</strong> is and why we use it?</p>
<p>Thanks in advance for your help... I am new to probability and statistics</p>
| Michael R. Chernick | 30,995 | <p>The empirical distribution is the distribution of the sample or sample estimate. </p>
<p>In your context the empirical distribution puts $p=$ (number of successes)/(number of trials) and $1-p=$ (number of failures)/(number of trials); this is your estimate of the binomial proportion parameter. It converges in probability to a constant as the sample size goes to infinity. </p>
<p>By classic probability the person just means ordinary probability theory. But if $m=N(A)$ and $n=N$, the convergence is <strong>not</strong> to $0$: the ratio converges to the constant because $m$ goes to infinity along with $n$. If $m$ were a constant, or converged to a finite limit, then $m/n$ would go to $0$ as $n$ goes to infinity. So there is no contradictory result. </p>
<p>To discuss axiomatic definition you need to put the term in context. </p>
|
177,760 | <p>Can you tell me what the difference is between <strong>empirical distribution</strong> and <strong>classical probability</strong>?
My teacher has told me that when we take the limit, the empirical frequency approaches a constant value </p>
<p>$$P(A)=\lim_{N\rightarrow\infty}f(A)=\lim_{N\rightarrow \infty}\frac{N(A)}{N}= \mathrm{constant}$$</p>
<p>where $f(A)$ is the frequency ratio, $N(A)$ is the number of times event $A$ is found to occur, and $N$ is the number of times the random experiment is repeated</p>
<p>But classical probability will give</p>
<p>$$P(A)=\lim_{n\rightarrow \infty }\frac{m}{n}= 0$$</p>
<p>but what I know of limits is like this</p>
<p>$$\lim_{x\rightarrow\infty}\frac{1}{x}=0$$</p>
<p>Then how come the empirical distribution gives a constant instead of zero?</p>
<p>And lastly, can you explain what the <strong>axiomatic definition</strong> is and why we use it?</p>
<p>Thanks in advance for your help... I am new to probability and statistics</p>
| KAT | 68,473 | <p>The difference between the classical definition and the empirical one is similar to the difference between a theory and an experiment in physics. The theory is developed in an abstract (perfect) way, while the experiments are practical observations.
The same happens with probability. </p>
<p>In the classic definition of probability of an event you assign a probability to an event based on abstract thinking. For example: what is the probability of getting the result 2 when performing the calculation $\frac{2x}{2}$, where $x=1,2,\dots,100$? There are $n=100$ possible outcomes but the result 2 will only occur once, so $m=1$. This is because we can write the relation as $\frac{2x}{2}=\frac{2}{2}x=x$, so only $x=2$ gives the right result. In this case we will say that the probability is $1/100$. So, in classical probability you think of the space of the outcomes and try to find an abstract reason to assign the probability (we used mathematical logic to come up with the number of favourable outcomes and the total number of possibilities).</p>
<p>In the empirical definition, on the other hand, you don't think, you just do experiments and count. So, to solve the last problem, you will do as many calculations as you can from the 100 possible and count how many times you get 2. For example, if you perform this experiment on the first 10 numbers ($N=10$) you will get the result of 2 only once ($N(A)=1$), so your estimate for the probability will be $\frac{1}{10}$. This is not the right probability, but the more experiments you do, the better the estimate is. The closer you get to exhausting the number of possible outcomes, the closer you are to the true probability. </p>
<p>Now, everything is fine when the possible outcomes are finite in number. The classical approach gives the right result but might require complex thinking, while the empirical approach gives without effort an estimate that will improve with the number of "measurements/experiments".</p>
<p>What about when you have an infinite number of outcomes? For example: what is the probability of selecting the number 6 from a box with all the natural numbers from 1 to 100? What if in the box there are the numbers to n=10.000, or n=10000000000.......0?
The classic definition has an answer for you. Since 6 is unique, the probabilities are $\frac{1}{100}$, $\frac{1}{10.000}$ and $\frac{1}{10000000....0}$. The last probability is almost zero, which is the case "$P(A)=\lim_{n\rightarrow \infty}\frac{m}{n}=0$".</p>
<p>The empirical definition will never give you a good answer for this question since it won't ever be able to exhaust the possible outcomes. If in $N$ tries the experimenter doesn't select the number 6, then the probability will indeed be $\frac{N(A)}{N}=\frac{0}{N}=0$, but the result was "correct" only by chance. Instead, if she selects 6 at the beginning of the experiment, the result is $\frac{N(A)}{N}=\frac{1}{N}$ and the experimenter will get closer to the true value only after a tremendous number of experiments. We should notice here that it is never the goal of the empiricist to reach infinity ($N\rightarrow\infty$), since she is always working with finite samples; she doesn't look for perfect knowledge but for useful approximations. </p>
<p>Another example : what is the probability of tails when flipping a fair coin?
The classic approach will argue that the probability of "tails" in one flip is $1/2$ because there are only two possible outcomes and "tails" is one of them $\frac{m}{n}=\frac{1}{2}$.
The empiricist will do $N$ experiments, count how many times $A=\text{tails}$ occurs, and find $\frac{N(\text{tails})}{N}$. This will always give her a constant, since $N$ is always a finite number of experiments.</p>
<p>Apart from the discourse on whether the empiricist wants to reach infinity, by the law of large numbers the average result from a large number of experiments will get closer to the expected value of the phenomenon studied. By this law the ratio $\frac{N(A)}{N}$ <em>converges</em> (it does not simply equal) to the probability of the event $A$, which is a constant, as $N\rightarrow\infty$.</p>
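<p>This behaviour is easy to see in a quick simulation (a Python sketch, with an arbitrary seed chosen only for reproducibility): the empirical frequency $N(A)/N$ of tails settles near the constant $1/2$ as $N$ grows.</p>

```python
import random

random.seed(0)  # arbitrary fixed seed so the run is reproducible

def empirical_frequency(num_flips):
    """Flip a fair coin num_flips times and return N(tails)/N."""
    tails = sum(random.random() < 0.5 for _ in range(num_flips))
    return tails / num_flips

# The estimate is a constant for any finite N, and it drifts
# toward the true probability 1/2 as N grows.
for n in (10, 1000, 100000):
    print(n, empirical_frequency(n))
```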
<p>The axiomatic definitions are conceived in an abstract, perfect manner such that no mathematical contradiction can occur. This makes it possible to build a solid theory using mathematical logic. The probability axioms were first proposed by Kolmogorov and can be found here <a href="http://en.wikipedia.org/wiki/Probability_axioms" rel="nofollow">http://en.wikipedia.org/wiki/Probability_axioms</a>. </p>
|
1,824,638 | <p>The figure shows a piece of string tied to a circle with a radius of one unit. The string is just long enough to reach the opposite side of the circle. Find the area of the region, not including the circle itself that is traced out when the string is unwound counterclockwise and continues counterclockwise until it reaches the opposite side again.</p>
<p><a href="https://i.stack.imgur.com/KzJRQ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzJRQ.gif" alt="enter image description here"></a></p>
| Mike Pierce | 167,197 | <p>I'm mostly condensing what Eric Weisstein wrote in <a href="http://mathworld.wolfram.com/GoatProblem.html" rel="nofollow noreferrer">this <em>MathWorld</em> article</a>, citing a 1998 paper by Hoffman, <em>The Bull and the Silo: An Application of Curvature</em>. You can solve this by finding the area of the semicircle the string traces (before the string begins to wrap) and then do some fancy calculus to find the area of the regions formed when the string begins to wrap around the circle. </p>
<p>By the calculations in the article, given a circle of radius <span class="math-container">$a$</span> and a string of length <span class="math-container">$L$</span>, the area <span class="math-container">$A$</span> will be
<span class="math-container">$$A = \frac{\pi L^2}{2} + \frac{L^3}{3a}\;.$$</span></p>
<p>So in your case where <span class="math-container">$a=1$</span> and <span class="math-container">$L=\pi$</span> we have <span class="math-container">$A = \frac{5}{6}\pi^3 \;\approx\; 25.838564$</span>.</p>
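<p>As a quick numerical sanity check of the closed form (just arithmetic, nothing deep):</p>

```python
import math

def unwound_area(a, L):
    """Area traced by a string of length L unwinding from a circle of radius a:
    a semicircle of radius L plus the two involute regions."""
    return math.pi * L**2 / 2 + L**3 / (3 * a)

A = unwound_area(1, math.pi)
print(A)  # 5*pi^3/6 ≈ 25.838564
assert math.isclose(A, 5 * math.pi**3 / 6)
```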
|
1,824,638 | <p>The figure shows a piece of string tied to a circle with a radius of one unit. The string is just long enough to reach the opposite side of the circle. Find the area of the region, not including the circle itself that is traced out when the string is unwound counterclockwise and continues counterclockwise until it reaches the opposite side again.</p>
<p><a href="https://i.stack.imgur.com/KzJRQ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzJRQ.gif" alt="enter image description here"></a></p>
| Riccardo Orlando | 335,442 | <p>Well, I cheated, and solved it numerically: it comes up at about 19.55.</p>
<p>Unfortunately that does not match up with Mike Pierce's answer of $\frac56 \pi^3$...</p>
<p>Here is a <a href="https://i.imgur.com/9PjSZ3b.png" rel="nofollow noreferrer">graph</a> of the interesting part, and here is the <a href="http://pastebin.com/5hWTvVjU" rel="nofollow noreferrer">Matlab code</a>.</p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| Quanta | 13,121 | <p>Marcus - Number Fields</p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| timur | 824 | <p>Michael Reed. <em>Abstract Non Linear Wave Equations</em>.</p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| hce | 2,781 | <p>Palais - Foundations of global non–linear analysis</p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| Michael Hardy | 6,316 | <p>Robin Hartshorne's lecture notes on projective geometry. This appeared as a book and is now out of print. The pages appear to be photographs of pages produced with a typewriter, plus hand-drawn illustrations.</p>
<p>Maybe a wiki should be set up where volunteers can transcribe from the book. Permission from copyright owners might be easy to get if they're not interested in continuing to publish it themselves, and if they are, an attempt to get permission for such a wiki might pressure them to put it back in print with better typesetting.</p>
|
2,031,842 | <p>If 3 is the remainder when dividing $P(x)$ with $(x-3)$, and $5$ is the remainder when dividing $P(x)$ with $(x-4)$, what is the remainder when dividing $P(x)$ with $(x-3)(x-4)$?</p>
<p>I'm completely puzzled by this, I'm not sure where to start...</p>
<p>Any hint would be much appreciated. </p>
| Siong Thye Goh | 306,553 | <p>Hint:
$$P(x)=Q(x)(x-3)(x-4)+Ax+B$$
$$P(3)=3$$
$$P(4)=5$$</p>
<p>Can you solve for $A$ and $B$?</p>
|
2,031,842 | <p>If 3 is the remainder when dividing $P(x)$ with $(x-3)$, and $5$ is the remainder when dividing $P(x)$ with $(x-4)$, what is the remainder when dividing $P(x)$ with $(x-3)(x-4)$?</p>
<p>I'm completely puzzled by this, I'm not sure where to start...</p>
<p>Any hint would be much appreciated. </p>
| Rene Schipperus | 149,912 | <p>Let $$P(x)=(x-4)(x-3)Q(x)+ax+b$$</p>
<p>Now you have
$$3a+b=3$$ and $$4a+b=5$$ So solve for $a$ and $b$.</p>
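<p>For completeness, the little $2\times 2$ system can be solved mechanically (a quick Python sketch, just plain arithmetic):</p>

```python
# Solve 3a + b = 3 and 4a + b = 5 for the remainder r(x) = a*x + b.
# (Plain elimination; any linear-algebra routine would do.)
a = (5 - 3) / (4 - 3)   # subtract the first equation from the second
b = 3 - 3 * a

print(a, b)  # 2.0 -3.0  -> the remainder is 2x - 3

# Sanity check: the remainder must reproduce P(3) = 3 and P(4) = 5.
assert a * 3 + b == 3
assert a * 4 + b == 5
```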
|
2,217,928 | <p>I know that in infinite dimensional Hilbert spaces sometimes the best we can do is to find an orthonormal basis in the sense that any element in H can be approximated arbitrarily close in the NORM by a finite linear combination of this basis elements.</p>
<p>So then does that mean we can't expect every $x$ in $H$ to be written as a finite linear combination of basis elements, correct? And can we even have things like $x= \sum_{k=1}^{\infty} a_k e_k $, where the $a_k$ are constants in the underlying field, usually $\mathbb C$ (usually the projections of $x$ on each $e_k$)? So then how do we deal with linear transformations? For example, how do we even define what a linear transformation does without explicitly saying what $T(x)$ is for each $x$ in $H$, i.e. how is saying what $T(e_k)$ is for each $k$ enough to describe the whole linear transformation? </p>
<p>Thanks for answers to either question. I see in a lot of proofs people writing $x$ as this kind of infinite sum, which confuses me since the infinite sum might not be in $H$.</p>
<p>Finally if we take some kind of infinite sum of elements in H, and the norm of that is finite, can we conclude the infinite sum is in H or that's still not enough?</p>
| Federico | 128,293 | <p>If $\{e_k\}$ is an orthonormal basis of your Hilbert space $H$, then every $x\in H$ can be expressed in the form:</p>
<p>\begin{equation}
x=\sum_{k} a_k e_k
\end{equation}</p>
<p>with, as you say, the coefficients $a_k$ in the field (most usually $\mathbb C$). It is important to note that the choice of the coefficients is unique, namely $a_k=\langle x,e_k\rangle$. </p>
<p>The converse is not true in general: for a sum like the one above to be in $H$ it is necessary (and sufficient) that
$$
\sum_{k} |a_k|^2<\infty.
$$</p>
<p>Of course, that would be always true if the dimension of the space (the cardinality of $\{e_k\}$) is finite. Hence, the infinite dimensional case is the only one of actual interest in this context.</p>
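<p>A finite-dimensional illustration of these facts (a sketch in plain Python; the orthonormal basis below is chosen arbitrarily): the coefficients $a_k=\langle x,e_k\rangle$ reconstruct $x$, and $\sum_k |a_k|^2$ equals $\|x\|^2$, the finite analogue of the square-summability condition above.</p>

```python
import math

# An orthonormal basis of R^3 (chosen arbitrarily for illustration).
basis = [
    (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0),
    (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0),
    (0.0, 0.0, 1.0),
]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = (3.0, 4.0, 5.0)

# Coefficients a_k = <x, e_k> are the unique choice that reconstructs x.
coeffs = [dot(x, e) for e in basis]
reconstruction = tuple(
    sum(a * e[i] for a, e in zip(coeffs, basis)) for i in range(3)
)

# Reconstruction and Parseval: sum |a_k|^2 == ||x||^2.
assert all(math.isclose(xi, ri) for xi, ri in zip(x, reconstruction))
assert math.isclose(sum(a * a for a in coeffs), dot(x, x))
```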
<p>Now, if you have a linear transformation $T$ defined on each of the $e_k$, then that determines $T$ on the whole of $H$, provided that the transformation is bounded, that is, when
$$
\|T(x)\|\leq M\|x\|
$$
for some fixed constant $M>0$. </p>
<p>In this case there is no problem:
$$
T(x)=\sum_{k} a_k T(e_k)
$$
and the sum in the right hand side is guaranteed to converge in $H$.
It is readily verified that in the finite dimensional case every linear transformation is bounded, so again the case of interest is when the space is infinite dimensional.</p>
<p>However, if $T$ is not bounded, the most you can expect to have is an operator defined in a dense linear subspace of $H$ (called the domain of the operator, and denoted $D(T)$): this domain is the set of the $x$ for which the sum above converges in $H$.</p>
|
3,996,790 | <p>I just realized that there may be a case where L'Hopital's rule fails, specifically</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which evaluates to an indeterminate form, specifically <span class="math-container">$\frac{\infty}{\infty}$</span>. Sure, we can cancel the <span class="math-container">$e^x$</span>s, but when we use L'Hopital's, we get</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{(e^x)^\prime}{(e^x)^\prime}$$</span></p>
<p>Since the derivative of <span class="math-container">$e^x$</span> is <span class="math-container">$e^x$</span>, we have</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which is our original limit. Therefore, L'Hopital's fails to work in this example.</p>
<p>Question: Does L'Hopital's rule actually fail in this example, or am I understanding it wrong?</p>
<p>Edit: I mean "fails" in which it does not make progress toward a determinate result.</p>
| Countable | 677,593 | <p>Just to add the precise statement to explain clearly what is going on:</p>
<p><span class="math-container">$\textbf{Theorem - L'Hospital's Rule:}$</span> Let <span class="math-container">$a,b\in [-\infty,\infty]$</span>. Suppose that f and g are differentiable real valued maps on <span class="math-container">$(a,b)$</span>, with <span class="math-container">$g'(x)\neq 0$</span> on <span class="math-container">$(a,b)$</span>., and the following hold:</p>
<p>(i) <span class="math-container">$\lim_{x\to b}f(x) = \lim_{x\to b}g(x) = +\infty$</span></p>
<p>and (ii) <span class="math-container">$\lim_{x\to b}\frac{f'(x)}{g'(x)} = L\in [-\infty,\infty]$</span>,</p>
<p>then we can finally say that <span class="math-container">$\lim_{x\to b}\frac{f(x)}{g(x)} = L$</span>.</p>
<p>Certainly <span class="math-container">$e^x$</span> is <span class="math-container">$C^{\infty}$</span>, and the conditions (i) and (ii) hold with <span class="math-container">$L = 1$</span>, so our original limit is 1 also (not that we needed L'Hospital's rule for that, but there it is, it works here).</p>
|
556,664 | <p>For example, if $f$ is integrable on $[0,3]$, is it also integrable on $[1,2]$? I tried thinking of a counterexample but couldn't, since I've only learned what implies integrability but not what integrability implies.</p>
| kahen | 1,269 | <p>First note that if $a \leq a'$ and $b' \leq b$ and $g: [a,b] \to \mathbb R$ is $0$ outside of $[a',b']$, then $\int_a^b g(x)\, dx = \int_{a'}^{b'} g(x)\,dx$. This is easily shown using the definition of the Darboux (or Riemann, if you prefer that definition) integral using partitions (tagged partitions for the Riemann definition).</p>
<p>Products of Riemann–Darboux integrable functions are Riemann–Darboux integrable, so in particular you have (in your concrete example) that $\int_1^2 f(x)\, dx = \int_0^3 f(x) \chi_{[1,2]}(x)\, dx$, where $\chi_{[1,2]}$ denotes the characteristic function of the interval $[1,2]$, i.e. </p>
<p>$\displaystyle\qquad \chi_A(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A\end{cases}$</p>
|
3,853,896 | <p>Do I consider the probability before drawing both cards or after?</p>
<p><strong>Question more clearly</strong>:
A single card is removed at random from a deck of <span class="math-container">$52$</span> cards. From the remainder we draw <span class="math-container">$2$</span> cards at random and find that they are both spades. What is the probability that the first card removed was also a spade?</p>
| JMoravitz | 179,297 | <p>To emphasize, @Barry's answer is correct and is the easiest way to think of the answer.</p>
<p>Since that confuses people for whatever reason, a way to convince people of this is instead to approach directly via definitions.</p>
<p>Recall that <span class="math-container">$\Pr(A\mid B) = \dfrac{\Pr(A\cap B)}{\Pr(B)}$</span> by definition. That is to say, the probability of an event <span class="math-container">$A$</span> occurring given that an event <span class="math-container">$B$</span> also occurs (<em>whether that is past, present, or future... irrelevant</em>) is the ratio of the probability of both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> occurring over the probability that <span class="math-container">$B$</span> occurs regardless.</p>
<p>Here, letting <span class="math-container">$A$</span> be the event that the first card is a spade, <span class="math-container">$B$</span> the event that both the second and third cards are spades, we have:</p>
<p><span class="math-container">$$\Pr(A\mid B) = \dfrac{\Pr(A\cap B)}{\Pr(B)}=\dfrac{\frac{13\cdot 12\cdot 11}{52\cdot 51\cdot 50}}{~~~~~\frac{13\cdot 12}{52\cdot 51}~~~~~} = \dfrac{11}{50}$$</span></p>
<p>In case <span class="math-container">$\Pr(B)$</span> confuses you, <a href="https://math.stackexchange.com/questions/1287393/if-you-draw-two-cards-what-is-the-probability-that-the-second-card-is-a-queen/1287396#1287396">see this related question</a> and/or again approach directly via definition. If you insist on doing this the long way, then recognize <span class="math-container">$\Pr(B) = \Pr(B\mid A)\Pr(A)+\Pr(B\mid A^c)\Pr(A^c)$</span> by law of total probability and see that it simplifies to what I claimed above.</p>
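<p>The displayed ratio can also be verified exactly with rational arithmetic (a small Python sketch, not part of the original argument):</p>

```python
from fractions import Fraction

# Pr(A ∩ B): the first three cards drawn are all spades.
p_a_and_b = Fraction(13 * 12 * 11, 52 * 51 * 50)

# Pr(B): the second and third cards are spades (the first is unrestricted).
p_b = Fraction(13 * 12, 52 * 51)

# Pr(A | B) = Pr(A ∩ B) / Pr(B)
print(p_a_and_b / p_b)  # 11/50
assert p_a_and_b / p_b == Fraction(11, 50)
```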
|
<p>Consider $\frac{1}{T}\sum_{t=1}^{T}\max\{ 0,a_t\}$. Can we say whether this is greater than or equal to $\max\{ 0,\frac{1}{T}\sum_{t=1}^{T}a_t\}$?</p>
| leonbloy | 312 | <p>The <a href="http://en.wikipedia.org/wiki/Jensen%27s_inequality#Finite_form" rel="nofollow">Jensen inequality</a> states that, if $f(x)$ is convex, then</p>
<p>$$ \frac{\sum_{t=1}^{T}f(x_t)}{T} \ge f\left(\frac{\sum_{t=1}^{T}x_t}{T}\right) $$</p>
<p>Now, $f(x)=\max(0,x)$ is convex. Hence</p>
<p>$$\frac{1}{T}\sum_{t=1}^T{\max\{0,x_t\}} \ge \max\left\{0,\frac{1}{T}\sum_{t=1}^T{x_t}\right\} $$</p>
|
191,187 | <p>Here I have a system as follows:</p>
<p><span class="math-container">$$\frac{dx}{dt}=a(by-x);
\frac{dy}{dt}=rx-xz;
\frac{dz}{dt}=(xy)^n-bz$$</span></p>
<p>Here <span class="math-container">$x, y$</span> and <span class="math-container">$z$</span> are positive real variables. All the parameters <span class="math-container">$a, r$</span> and <span class="math-container">$b$</span> are all positive real numbers and <span class="math-container">$n$</span> is a natural number. </p>
<p>How can I a make the phase portrait 3D of the system by varying the natural number <span class="math-container">$n$</span> from say <span class="math-container">$1$</span> to <span class="math-container">$100$</span>?</p>
| zhk | 8,538 | <pre><code>a = 1; b = 1; r = 1;
pf = ParametricNDSolveValue[{x'[t] == a*(b*y[t] - x[t]),
y'[t] == r*x[t] - x[t]*z[t], z'[t] == (x[t]*y[t])^n - b*z[t],
x[0] == x0, y[0] == x0, z[0] == x0}, {x[t], y[t], z[t]}, {t,
20}, {x0, n}];
Manipulate[ParametricPlot3D[pf[x0, n], {t, 0, 20}], {{x0, -1, "x0"}, -2, 2},
{{n, 1, "n"}, 1, 100}]
</code></pre>
|
191,187 | <p>Here I have a system as follows:</p>
<p><span class="math-container">$$\frac{dx}{dt}=a(by-x);
\frac{dy}{dt}=rx-xz;
\frac{dz}{dt}=(xy)^n-bz$$</span></p>
<p>Here <span class="math-container">$x, y$</span> and <span class="math-container">$z$</span> are positive real variables. All the parameters <span class="math-container">$a, r$</span> and <span class="math-container">$b$</span> are all positive real numbers and <span class="math-container">$n$</span> is a natural number. </p>
<p>How can I a make the phase portrait 3D of the system by varying the natural number <span class="math-container">$n$</span> from say <span class="math-container">$1$</span> to <span class="math-container">$100$</span>?</p>
| Alex Trounev | 58,388 | <p>3D phase portrait can be built by analogy with 2D using <code>SliceVectorPlot3D[]</code></p>
<pre><code>p[s1_, a1_, b1_, r1_, n1_] :=
Block[{s = s1, a = a1, b = b1, r = r1, n = n1},
v3D = {a*(b*y - x), r*x - x*z, (x*y)^n - b*z};
SliceVectorPlot3D[v3D/Norm[v3D],
s, {x, -10, 10}, {y, -10, 10}, {z, -10, 10},
PlotTheme -> "Scientific",
VectorColorFunction -> "BlueGreenYellow", VectorScale -> Small,
VectorPoints -> Fine, PlotLabel -> Row[{"n=", n}],
AxesLabel -> Automatic]]
Table[p["XStackedPlanes", 2, 1, 4, n], {n, 1, 100, 33}]
Table[p["YStackedPlanes", 2, 1, 4, n], {n, 1, 100, 33}]
Table[p["ZStackedPlanes", 2, 1, 4, n], {n, 1, 100, 33}]
</code></pre>
<p><a href="https://i.stack.imgur.com/7MHJI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7MHJI.png" alt="fig1"></a></p>
|
1,782,423 | <p>Gödel's completeness theorem says that for any first order theory $F$, the statements derivable from $F$ are precisely those that hold in all models of $F$. Thus, it is not possible to have a theorem that is "true" (in the sense that it holds in the intersection of all models of $F$) but unprovable in $F$.</p>
<p>However, Gödel's completeness theorem is not constructive. <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_theorem#Relationship_to_the_compactness_theorem" rel="nofollow">Wikipedia</a> claims that (at least in the context of reverse mathematics) it is equivalent to the weak König's lemma, which in a constructive context is not valid, as it can be interpreted to give an effective procedure for the halting problem.</p>
<p>My question is, is it still possible for there to be "unprovable truths" in the sense that I describe above in a first order axiomatic system, given that Gödel's completeness theorem is non-constructive, and hence, given a property that holds in the intersection of all models of $F$, we may not actually be able to <em>effectively</em> prove that proposition in $F$?</p>
| Rob Arthan | 23,171 | <p>Given a sentence $\phi$ that holds in all models of a first-order theory $F$, you <strong>can</strong> effectively find a proof of $\phi$: just enumerate all proofs in $F$ until you find one that proves $\phi$ (there is such a proof, since you are given that $\phi$ holds in all models of $F$ and hence, by the completeness theorem, you are given that $\phi$ is provable). So there are no unprovable truths in first-order logic.</p>
<p>The non-constructive nature of the <strong>proof</strong> of the completeness theorem is that it uses non-constructive methods to prove the existence of a model of a sentence that cannot be disproved. When you apply the theorem to some sentence $\phi$ that you can prove (maybe non-constructively) to be true in all models, finding the proof of $\phi$ is effective.</p>
|
2,511,061 | <p>I will prove $0=1$.
From the definition of the factorial, we know that
zero factorial is equal to one and
one factorial is equal to one,
so $0!=1!$.
The factorials get cancelled on both sides,
and we get $0=1$.
Is this right..?</p>
| Xander Henderson | 468,350 | <p>The symbol "!" denotes a function that takes as input nonnegative integers, and has an output defined by a recurrence relation. Perhaps it might be easier to replace a "!" on the right with something that looks more like traditional functional notation:
$$ \operatorname{fact} : \mathbb{Z}_{\ge 0} \to \mathbb{R}
\qquad \text{defined by} \qquad \operatorname{fact}(n) := \begin{cases} 1 & \text{if $n=0$, and}\\ n\cdot \operatorname{fact}(n-1) & \text{otherwise.}\end{cases}
$$
What you are asserting is that
$$
\operatorname{fact}(m) = \operatorname{fact}(n)
\iff m = n.
$$
This is (more or less) the definition of an injective (or one-to-one) function. If a function is one-to-one, then every <em>output</em> corresponds to exactly one input. When this happens, we can "cancel" the function on two sides of an equation. However, the factorial function is <em>not</em> one-to-one. As you have already noted,
$$
\operatorname{fact}(0) = \operatorname{fact}(1) = 1.
$$
Since this function is not one-to-one, we can't "cancel" it as you want.</p>
<p>That being said, note that $\operatorname{fact}$ can be restricted to a domain where it <em>is</em> one-to-one. Indeed, it is sufficient to throw away zero. If you define
$$ \operatorname{fact} : \mathbb{Z}_{> 0} \to \mathbb{R}
\qquad \text{defined by} \qquad \operatorname{fact}(n) := \begin{cases} 1 & \text{if $n=1$, and}\\ n\cdot \operatorname{fact}(n-1) & \text{otherwise,}\end{cases}
$$
then you get a one-to-one function that can be "canceled".</p>
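<p>The failure of cancellation is just the failure of injectivity, which can be checked directly (a trivial Python illustration):</p>

```python
import math

# The factorial function is not one-to-one on {0, 1, 2, ...}:
# two distinct inputs share the output 1, so fact(m) = fact(n)
# does not let us conclude m = n.
assert math.factorial(0) == math.factorial(1) == 1

# Restricted to {1, 2, 3, ...} it is strictly increasing, hence one-to-one,
# and there cancellation is legitimate.
values = [math.factorial(n) for n in range(1, 10)]
assert all(a < b for a, b in zip(values, values[1:]))
```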
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| David Wheeler | 23,285 | <p>You may have noticed multiplying matrices can be hard. By changing bases, we can make it easier.</p>
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| Community | -1 | <p>Linear algebra is all about basis-change. Matrices <em>are</em> change of basis machines !</p>
<p>When you form a linear combination $a\vec u+b\vec v+\cdots c\vec w$, you can see it as the vector $(a,b,\cdots c)$ expressed in the basis $(\vec u,\vec v,\cdots \vec w)$.</p>
<p>For the same reason, a matrix product $AB$ expresses what the matrix $A$ becomes when applied to the vectors of the matrix $B$. And think that most matrix operations can be described as products of elementary transforms (which are changes of basis involving one or two axes).</p>
<p>Looking at the resolution of a system of linear equations, you can see it as a change of basis such that the hyperplanes corresponding to the equations become parallel to the axes, so that computing their intersection becomes trivial.</p>
<p><a href="https://i.stack.imgur.com/RWI2y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RWI2y.png" alt="enter image description here"></a></p>
<p>And so on, and so on.</p>
<p>The Eigendecomposition is indeed a convenient way to characterize a change of basis, as it finds the directions that are left unchanged by the transforms (the axes that are mapped onto themselves). These special directions are indeed quite interesting as the behavior of the transform is very simple there (just multiplication by a scalar).</p>
<p>Rotations are a particularly important class of transformations, as they preserve distances and cause no deformation (they are so-called "rigid-body" transforms). The Eigendecomposition expresses that any (diagonalizable) transform can be seen as a rotation that aligns the Eigendirections with the axes, followed by a (possibly anisotropic) stretching of the axes, followed by the inverse rotation.</p>
<p>The Eigendecomposition generalizes as the Singular Value decomposition. It allows the two spaces to have differing dimensions, but still decomposes a transform as a sequence of rotation/scaling/rotation. </p>
<p><a href="https://i.stack.imgur.com/7QZpl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7QZpl.png" alt="enter image description here"></a></p>
<p>The Singular Values fully characterize the resizing/deformation (and possible loss of dimensions), giving a very synthetic view on the transform.</p>
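<p>A concrete illustration of the rotation/scaling/rotation picture (a pure-Python sketch; the two rotation angles are arbitrary choices, and a real computation would use a linear-algebra library): build $A=USV^T$ from two rotations and a diagonal stretch, then recover the singular values as the extreme stretch factors of $A$ on the unit circle.</p>

```python
import math

def rot(theta):
    """2x2 rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Build A = U S V^T: a rotation, an axis stretch by (3, 1), another rotation.
U, Vt = rot(0.7), rot(-0.3)
S = [[3.0, 0.0], [0.0, 1.0]]
A = matmul(matmul(U, S), Vt)

# The singular values reappear as the extreme stretch factors of A over
# unit vectors: the semi-axes of the ellipse in the picture above.
norms = []
for k in range(3600):
    t = 2 * math.pi * k / 3600
    x, y = math.cos(t), math.sin(t)
    Ax = (A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y)
    norms.append(math.hypot(*Ax))

print(max(norms), min(norms))  # close to 3.0 and 1.0
```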
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| Benjamin Lindqvist | 96,816 | <p>Note that the Fourier transform is simply a change of basis. It is entirely possible you're actually reading <em>this very answer</em> thanks to a change of basis. Check out <a href="https://en.wikipedia.org/wiki/Orthogonal_frequency-division_multiplexing" rel="noreferrer">https://en.wikipedia.org/wiki/Orthogonal_frequency-division_multiplexing</a> for instance. Edit: it might be weird to see the connection if you're just starting out with linear algebra. In signal processing, among other fields, we generalize the notion of vectors in such a way that <em>functions can also be seen as vectors</em>. Starting from this idea, it might not be so weird to talk about basis changes for <em>functions</em> - which is exactly what the Fourier transform does. Indeed, this is what Taylor expansions do as well, in a sense.</p>
<p>Another thing, in wireless communications with multiple transmitter/receiver antenna systemes (MIMO) it's very common to model the multiplicative noise in the channel as a linear transformation, i.e. the received signal is given by $\textbf{y} = H\textbf{x}$ (plus additive noise). I.e. there is one noisy channel for <em>every pair</em> of transmitter/receiver antennas. But by the singular value decomposition, we can find the appropriate change of basis for the transmitted signal that makes it "orthogonal to the channel" in the sense that $H$ is diagonal. This helps throughput lot.</p>
|
219,782 | <p>Hi I'm wondering if there's some workaround to get Mathematica to use time-shifting identities for Laplace and Inverse Laplace transforms. My examples are below.</p>
<pre><code>$Assumptions = Element[{t, τ}, PositiveReals];
LaplaceTransform[f[t - τ] HeavisideTheta[t - τ], t, s]
(* Output *)
(* LaplaceTransform[f[t - τ] HeavisideTheta[t - τ], t, s] *)
(* Desired Output *)
(* E^(-s τ) LaplaceTransform[f[t], t, s] *)

(* And for the inverse *)
InverseLaplaceTransform[E^(-s τ) LaplaceTransform[f[t], t, s], s, t]
(* Output is again unchanged *)
(* Desired Output *)
(* f[t - τ] HeavisideTheta[t - τ] *)
</code></pre>
<p>Thanks for the help</p>
| imas145 | 61,387 | <p>First off, the code you posted won't even parse properly in Mathematica. Second, built-in Mathematica functions are capitalized, and function application in general uses brackets <code>[</code> and <code>]</code> instead of parentheses, so use <code>Sin[x]</code> instead of <code>sin(x)</code>. Similarly, <span class="math-container">$\pi$</span> is implemented as <code>Pi</code>, not <code>pi</code>. </p>
<p>Then, looking at the Fourier series, your implementation is also incorrect. The fraction <span class="math-container">$\frac{1}{2}$</span> is outside the series, so you don't want that in the summand function. Also, in my opinion, having a separate summand function is unnecessary in this case, I think it's cleaner to just implement the series directly. </p>
<p>As for how to get the Fourier series and <span class="math-container">$f(t)$</span> in the same graph, use the pattern <code>Plot[{f, g}, ...]</code>. All in all, here's the code I'd write (assuming you want to hand-write the Fourier series instead of using the built-in <code>FourierSeries</code>):</p>
<pre><code>fourier[t_, Nmax_] :=
1/2 + Sum[Sin[2 π n]/(π n) Cos[n π t], {n, 1, Nmax}] +
Sum[((-1)^n - Cos[2 π n])/(π n) Sin[n π t], {n, 1, Nmax}]
f[t_] := Piecewise[{{0, 0 < t < 1}, {1, 1 < t < 2}}]
Plot[{f[t], fourier[t, 10]}, {t, 0, 2.5}]
</code></pre>
<p><em>Note: it'd be better to use the linearity of sums to combine the two <code>Sum</code>s to a single one, but I kept them separate for clarity.</em></p>
<p><a href="https://i.stack.imgur.com/5oZS3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5oZS3.png" alt="Plot"></a></p>
<p>You seem to be shaky on the basic syntax, so I'd suggest reading a getting started guide such as <a href="https://reference.wolfram.com/language/tutorial/GettingStartedOverview.html" rel="nofollow noreferrer">this</a> or <a href="https://www.wolfram.com/language/fast-introduction-for-math-students/en/get-started/" rel="nofollow noreferrer">this</a>. This way you can get to know the language more quickly and better identify the areas where you need advice.</p>
|
232,825 | <p>I have to read and process a file that looks like this:</p>
<pre><code> MASSA TMASS1
uscita da elettrodi:
0 1 0.56705 -19.98160 2.80000 -0.87939 0.66823 -0.63034 0.39513
ingresso tmass1:
0 1 0.56705 -19.13351 2.00000 -0.37791 0.66823 -0.63034 0.39513
MASSA TMASS
uscita da elettrodi:
0 1 2.10543 17.20236 -1.57617 -3.40000 -0.97477 -0.07910 0.20872
MASSA TMASS1
uscita da elettrodi:
0 7 0.00018 -18.08245 -1.30564 3.40000 0.57294 -0.75691 -0.31437
</code></pre>
<p>As you can see there are both text and numbers; the numbers must be put into one of two lists depending on whether the preceding line says <code>MASSA TMASS1</code> or <code>MASSA TMASS</code>, and the lines that say <code>uscita da elettrodi:</code> or <code>ingresso tmass1:</code> must be skipped.</p>
<p>How can I read it?</p>
<p><strong>EDIT</strong> For the above example I expect an output like this:</p>
<pre><code>{{0 1 0.56705 -19.98160 2.80000 -0.87939 0.66823 -0.63034 0.39513},{0 1 0.56705 -19.13351 2.00000 -0.37791 0.66823 -0.63034 0.39513},{0 7 0.00018 -18.08245 -1.30564 3.40000 0.57294 -0.75691 -0.31437}}
{0 1 2.10543 17.20236 -1.57617 -3.40000 -0.97477 -0.07910 0.20872}
</code></pre>
| xzczd | 1,871 | <p>You need <code>EmpiricalDistribution</code>:</p>
<pre><code>data = {14, 15, 16, 22, 24, 25};
weights = {1, 1, 3, 2, 2, 5};
\[ScriptCapitalD] = EmpiricalDistribution[weights -> data];
Expectation[x, x \[Distributed] \[ScriptCapitalD]]
(* 21 *)
</code></pre>
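<p>As a cross-check, the expectation of this empirical distribution is just the weighted mean of the data; the same computation in plain Python (for comparison only):</p>

```python
data = [14, 15, 16, 22, 24, 25]
weights = [1, 1, 3, 2, 2, 5]

# expectation of the empirical distribution = weighted mean of the data
mean = sum(w * x for w, x in zip(weights, data)) / sum(weights)
print(mean)  # -> 21.0
```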
|
24,660 | <p>Pros and cons of the number line model vs the color counter model</p>
<p>When teaching multiplication to elementary schoolers, the "number line model" and the "color counter model" are both widely used techniques. Can somebody help me understand some of the pros and cons of either model? Thank you</p>
| Jasper | 667 | <p>As someone who never heard of the color counter model, I find it overly complicated.</p>
<p>I assume that the question is asked in the context of signs when multiplying integers. While both methods arrive at the correct results, I can't see any advantages of the color counter approach.</p>
<p>I'd rather fear that this method hides the fact - and more importantly, the reason - why products of integers with equal sign are positive.</p>
|
2,386,471 | <p>I have a solved question from Ross as stated below.</p>
<blockquote>
<p>Q : Suppose that each of three men at a party throws his hat into the
center of the room. The hats are first mixed up and then each man
randomly selects a hat. What is the probability that none of the three
men selects his own hat?</p>
<p>Sol: We shall solve this by first calculating the complementary
probability that at least one man selects his own hat......</p>
</blockquote>
<p>I want to start with basics and identify the sample space first.</p>
<p>$$\mathrm{Space} = \{h_1p_1, h_1p_2, h_1p_3, h_2p_1, h_2p_2, h_2p_3, h_3p_1, h_3p_2, h_3p_3\}$$</p>
<p>where $h_ip_j$ is the event of picking up a hat of person $i$ by person $j$.</p>
<p>To satisfy the condition that nobody picks his own hat, $i$ should not be equal to $j$.</p>
<p>Why is the complement "at least one man selects his own hat" and not "all men select their own hats"?</p>
| Fred | 380,717 | <p>There is a basis $C_1,C_2,\ldots, C_{n^2}$ of $M_{n \times n}$ such that each $C_j$ is singular. For the identity matrix $I$ we have</p>
<p>$I=\sum_{k=1}^{n^2}s_kC_k$ for some scalars $s_k$.</p>
<p>Then $A=AI=\sum_{k=1}^{n^2}s_kAC_k=0$</p>
<p>(symmetry was not needed)</p>
|
365,166 | <p>I have trouble coming up with combinatorial proofs. How would you justify this equality?</p>
<p><span class="math-container">$$
n\binom {n-1}{k-1} = k \binom nk
$$</span>
where <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$k$</span> is an integer.</p>
| Ross Millikan | 1,827 | <p>I would write the expression for the binomial coefficients in terms of factorials and notice what happens.</p>
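<p>Spelling that suggestion out (for integers $1 \le k \le n$), the factorial forms agree term by term:</p>
<p>$$ n\binom{n-1}{k-1} \;=\; \frac{n\,(n-1)!}{(k-1)!\,(n-k)!} \;=\; \frac{n!}{(k-1)!\,(n-k)!} \;=\; \frac{k\,n!}{k!\,(n-k)!} \;=\; k\binom{n}{k}. $$</p>
<p>(For a combinatorial reading: both sides count the ways to pick a committee of $k$ people from $n$ with a distinguished chair, choosing the chair first on the left and the committee first on the right.)</p>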
|
287,405 | <p>Hi have this sequence:</p>
<p>$$\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n}$$</p>
<p>I understand that this is a <em>Geometric series</em>, so this is what I've done to find the sum.
$$\sum\limits_{n=1}^\infty (-1)^n\frac{3^{n}\cdot 3^{-2}}{4^n}$$
$$\sum\limits_{n=1}^\infty (-1)^n\cdot 3^{-2}{(\frac{3}{4})}^n$$</p>
<p>So $a= (-1)^n\cdot 3^{-2}$ and $r=\frac{3}{4}$ and the sum is given by
$$(-1)^n\cdot 3^{-2}\cdot \frac{1}{1-\frac{3}{4}}$$</p>
<p>Solving this I'm getting the result $\frac{4}{9}$, which I know is incorrect because WolframAlpha gives a different result.</p>
<p>So where am I making the mistake?</p>
| Thomas | 26,188 | <p>You have</p>
<p>$$\begin{align}
\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n}
&= \sum_{n=1}^{\infty} \frac{(-1)(-1)^{n-1}\frac{1}{3}3^{n-1}}{4\cdot 4^{n-1}} \\
&= \sum_{n=1}^{\infty} \frac{-1}{4\cdot 3}\frac{(-1)^{n-1}3^{n-1}}{4^{n-1}}\\
&= \sum_{n=1}^{\infty} \frac{-1}{12}\left(\frac{-3}{4}\right)^{n-1}
\end{align}$$
So you have $a = \frac{-1}{12}$ and $r = \frac{-3}{4}$.</p>
<p>Note the key thing here: both $a$ and $r$ are constants/numbers. They do not depend on $n$. The idea is that you rewrite your series so that it is of exactly the form
$$
\sum_{n=1}^{\infty} a r^{n-1}
$$
where again $a$ and $r$ are constants/numbers.</p>
|
287,405 | <p>Hi have this sequence:</p>
<p>$$\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n}$$</p>
<p>I understand that this is a <em>Geometric series</em>, so this is what I've done to find the sum.
$$\sum\limits_{n=1}^\infty (-1)^n\frac{3^{n}\cdot 3^{-2}}{4^n}$$
$$\sum\limits_{n=1}^\infty (-1)^n\cdot 3^{-2}{(\frac{3}{4})}^n$$</p>
<p>So $a= (-1)^n\cdot 3^{-2}$ and $r=\frac{3}{4}$ and the sum is given by
$$(-1)^n\cdot 3^{-2}\cdot \frac{1}{1-\frac{3}{4}}$$</p>
<p>Solving this I'm getting the result $\frac{4}{9}$, which I know is incorrect because WolframAlpha gives a different result.</p>
<p>So where am I making the mistake?</p>
| Adi Dani | 12,848 | <p>$$\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n}=\sum\limits_{n=1}^\infty (-1)^n\frac{1}{9}\left(\frac{3}{4}\right)^n=\frac{1}{9}\sum\limits_{n=1}^\infty \left(-\frac{3}{4}\right)^n=
\frac{1}{9}\left(\frac{1}{1+3/4}-1\right)$$</p>
|
2,614,375 | <p>I have tried using prime factorization which is $2^2 \times 5^2 \times 11$ and found out that I'll need $37$ slices. but in the question paper which is a multiple choice question there's no such answer. the choices are $30,31,32,61,110$.</p>
| Axel Kemper | 58,610 | <p>To split the cube into $10 \times 10 \times 11$ equal parts as suggested by @bof, you need $9 + 9 + 10 = 28$ cuts. This separates the original cube into $10 + 10 + 11 = 31$ slices.</p>
<p>However, you can bring down this number to $5 + 4 + 4 = 13$ cuts by stacking the intermediate slabs and thus yielding more parts per cut.</p>
<p>The following picture illustrates how to get $11$ slices with five cuts:</p>
<p><a href="https://i.stack.imgur.com/Y2SZR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y2SZR.png" alt="enter image description here"></a></p>
<p>Similarly, $10$ slices can be obtained with four cuts:</p>
<p><a href="https://i.stack.imgur.com/l5bmH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l5bmH.png" alt="enter image description here"></a></p>
|
613,836 | <blockquote>
<p>Let $x,y,z$ be integers and $11$ divides $7x+2y-5z$. Show that $11$
divides $3x-7y+12z$.</p>
</blockquote>
<p>I know a method to solve this problem which is to write into $A(7x+2y-5z)+11(B)=C(3x-7y+12z)$, where A is any integer, B is any integer expression, and C is any integer coprime with $11$.</p>
<p>I have tried a few trials for example $(7x+2y-5z)+ 11(x...)=6(3x-7y+12z)$, but it doesn't seem to work. My question is are there any tricks or algorithms for quicker way besides trials and errors? Such as by observing some hidden hints or etc?</p>
<p>I am always weak at this type of problems where we need to make smart guess or gain some insight from a pool of possibilities? Any help will be greatly appreciated. And maybe some tips to solve these types of problems.</p>
<p>Thanks very much!</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $\ $ Scale the first sum so that, mod $11,\,$ its $\,y$ coefficient $\,\color{#0a0}2\,$ becomes that of the second sum, i.e. $\,\color{#c00}{-7\equiv 4}.\,$ So we need to multiply $\rm\,\color{orange}{by\ 2}\,$ to scale from $\,\color{#0a0}2$ to $\,\color{#c00}4.\,$ Doing so we obtain</p>
<p>$$\begin{eqnarray}{\rm mod}\ 11\!:\,\ 0&\equiv&\ \, 7x\!+\!\color{#0a0}2y\!-\!5z \\ \overset{\color{orange}{\times\ 2}}\Rightarrow\ 0&\equiv& 14x\!+\!4y\!-\!10z \\ &\overset{\phantom{2}}\equiv&\ \, 3x\!\color{#c00}{-\!7}y\!+\!12z \end{eqnarray}\qquad\qquad $$</p>
<p><strong>Remark</strong> $\ $ The converse holds also since $\,11\mid 2n\iff 11\mid n$. I picked the smallest coefficient $\,\color{#0a0}2\,$ because generally that will be easiest to divide by (or invert). In fact, it is very easy to divide by $2$ mod odd $\,m,\,$ since one of $\,n,\ n+m\,$ is even; e.g. as above $\,\color{#c00} -7\equiv -7+11 = 4\,$ is even, hence $\rm\,\color{#c00}{-7}/\color{#0a0}2 \equiv 4/2 \equiv \color{orange}2.$</p>
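<p>The claim can also be confirmed by brute force (a quick check of my own): since divisibility by $11$ depends only on residues mod $11$, checking all residue triples suffices.</p>

```python
import itertools

# exhaustively verify: 11 | 7x+2y-5z  implies  11 | 3x-7y+12z
for x, y, z in itertools.product(range(11), repeat=3):
    if (7 * x + 2 * y - 5 * z) % 11 == 0:
        assert (3 * x - 7 * y + 12 * z) % 11 == 0
print("verified for all residues mod 11")
```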
|
2,091,251 | <p>I want to be able to check if a function f is even, odd, or neither using Maple's symbolic math. Unfortunately I don't get a boolean return of 'true' on a function I know is even.</p>
<pre><code>g:=abs(x)/x^2
evalb(g(x)=g(-x))
false
</code></pre>
<p>Since my function is even, that is a problem. It turns out that my expression is multiplying g by x or -x instead of inputting/composing them.</p>
<p>How can I get Maple to check the parity of my function?</p>
| saforrest | 266,106 | <p>All the right ideas are already in other answers/comments, but I'll post this as an answer to hopefully provide some more detail.</p>
<p>The way people normally use 'function' in a programming context is a construct which takes some number of parameters and executes one or more statements making use of these parameters.</p>
<p>In that sense, your definition of <strong>g</strong> isn't a function, it's an expression. The instances of 'x' inside aren't function parameters, but unassigned symbols. (I should say however, at the risk of confusing you further, that Maple's documentation and type system frequently uses the word 'function' to refer to expressions, e.g. in the definition of the Maple <a href="https://www.maplesoft.com/support/help/Maple/view.aspx?path=type/function" rel="nofollow noreferrer">type function</a>. But don't worry about that right now.)</p>
<p>The easiest way to do what you want using your expression <strong>g</strong> is what you already found:</p>
<pre><code>g:=abs(x)/x^2;
evalb( g = eval(g, x=-x) );
</code></pre>
<p>Going back to the question about a 'function', the thing in Maple which corresponds most naturally to that idea is a 'procedure'. Here are two ways to define a procedure corresponding to <strong>g</strong> and perform the test you've already done. I'll call this function 'h':</p>
<p>Method #1:</p>
<pre><code>h := x -> abs(x)/x^2;
evalb( h(x) = h(-x) );
</code></pre>
<p>Method #2:</p>
<pre><code>h := unapply( abs(x)/x^2, x );
evalb( h(x) = h(-x ) );
</code></pre>
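<p>As an aside, the same parity test can be mirrored numerically outside of a computer-algebra system; a minimal Python sketch (the helper name is my own, and sampling a few points is evidence rather than a proof):</p>

```python
def looks_even(f, samples=(0.5, 1.3, 2.7, 10.0), tol=1e-12):
    """Numeric parity check: does f(x) equal f(-x) at a few sample points?"""
    return all(abs(f(x) - f(-x)) <= tol for x in samples)

g = lambda x: abs(x) / x**2
print(looks_even(g))  # -> True
```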
<p>Hope that helps.</p>
|
77,246 | <p>What is the "right" analog in the orbifold case of a singular homology of a topological space?
We can not just take the homology of the underlying space, because it does not contain much information.
For example, is there any kind homology of orbifolds such that the first homology group is the abelianization of the fundamental group of the orbifold? And such that an $n$-dim orbifold will have all homology group higher than $n$-dim equal to zero? And if there is such a homology, will there be Poincare duality in the orbifold case???</p>
<p>Thanks very much! </p>
| Michael Joyce | 16,002 | <p>I do not know the current state of the art, but I can point you to the paper of <a href="https://arxiv.org/abs/math/0004129" rel="nofollow noreferrer">Chen and Ruan</a> which should have everything you are asking for.</p>
|
77,246 | <p>What is the "right" analog in the orbifold case of a singular homology of a topological space?
We can not just take the homology of the underlying space, because it does not contain much information.
For example, is there any kind homology of orbifolds such that the first homology group is the abelianization of the fundamental group of the orbifold? And such that an $n$-dim orbifold will have all homology group higher than $n$-dim equal to zero? And if there is such a homology, will there be Poincare duality in the orbifold case???</p>
<p>Thanks very much! </p>
| john mangual | 1,358 | <p>A math definition of the orbifold Euler characteristic appears in <a href="https://eudml.org/doc/164637" rel="nofollow noreferrer">On the Euler Number of an Orbifold</a>. If a group <span class="math-container">$G$</span> acts on a manifold <span class="math-container">$X$</span>, we could try to write
\[ \chi(X/G) = \frac{1}{|G|} \sum_{g \in G} \chi(X^g) \]
where <span class="math-container">$X^g$</span> is the fixed-point set of <span class="math-container">$g \in G$</span>. Instead it seems to be better to sum over the conjugacy classes <span class="math-container">$[g]$</span> of <span class="math-container">$G$</span>:
\[ \chi(X,G) = \sum_{[g]} \chi(X^g/C(g)) \]
where <span class="math-container">$C(g) = \{ h: hgh^{-1}=g\}$</span> is the <em><a href="https://en.wikipedia.org/wiki/Centralizer_and_normalizer" rel="nofollow noreferrer">centralizer</a></em> of <span class="math-container">$g \in G$</span>. (By orbit-stabilizer for the conjugation action, <span class="math-container">$|[g]|\cdot |C(g)| = |G|$</span>.)</p>
<p>This definition was motivated by some physicists in the late 1980's, and by the mid 90's the idea had been extended to cohomology. See <a href="https://arxiv.org/abs/hep-th/9408074" rel="nofollow noreferrer">A Strong Coupling Test of S-Duality</a>:
\[ H^*(X/G) = \bigoplus_{[g]} H^*(X^g)^{C(g)} \]</p>
<p>This is the direct sum of the centralizer-invariant parts <span class="math-container">$[\cdot]^{C(g)}$</span> of the cohomology <span class="math-container">$H^*(\cdot)$</span> of the fixed-point sets <span class="math-container">$X^g$</span>, as <span class="math-container">$g$</span> runs over the conjugacy classes of <span class="math-container">$G$</span>.</p>
<p>This was used to find the Euler characteristics of the Hilbert schemes of points on a non-singular surface (originally computed by Lothar Göttsche):</p>
<p>\[ \sum_{n=0}^\infty q^n \chi(X^{[n]}) = \prod_{n=1}^\infty \frac{1}{(1-q^n)^{\chi(X)}}\]</p>
<p>So there's a connection to the Dedekind <a href="https://en.wikipedia.org/wiki/Dedekind_eta_function" rel="nofollow noreferrer">η-function</a>. This generating function might be different if <span class="math-container">$X$</span> itself is singular.</p>
|
2,130,823 | <blockquote>
<p>Is it possible for some integer $n>1$ that $2^n-1\mid 3^n-1$ ?</p>
</blockquote>
<p>I have tried many things, but nothing worked.</p>
| B. S. | 231,386 | <p>When <span class="math-container">$2^n-1$</span> is a Mersenne prime, this can be resolved (although this isn't very helpful, because we only know of 49 Mersenne primes and we don't know whether there are infinitely many; however, it sure is nice to know that <span class="math-container">$ 2^{74,207,281} − 1$</span> does not divide <span class="math-container">$3^{74,207,281} − 1$</span>).</p>
<p>Let <span class="math-container">$q = 2^p-1$</span> be prime, so that <span class="math-container">$\mathbb{F}_q$</span> is a field. Polynomials of degree <span class="math-container">$k$</span> have at most <span class="math-container">$k$</span> roots in a field. Applying this to <span class="math-container">$x^p-1$</span>, which has the solution <span class="math-container">$2$</span> mod <span class="math-container">$q$</span>, we see that it has at most <span class="math-container">$p$</span> solutions. But the set <span class="math-container">$A=\{1,2,\ldots,2^{p-1}\}$</span> obviously consists of <span class="math-container">$p$</span> distinct solutions, so it is the complete solution set. If <span class="math-container">$q\mid 3^p-1$</span>, then <span class="math-container">$3$</span> is a solution, hence <span class="math-container">$3 \in A$</span>; but all the elements of the set <span class="math-container">$A-3$</span> have modulus less than <span class="math-container">$q$</span> (obviously) and are different from <span class="math-container">$0$</span>, so no such solution can exist.</p>
<p>When <span class="math-container">$n$</span> is a prime but <span class="math-container">$2^n-1$</span> is not necessarily a Mersenne prime, we can employ the same reasoning for a prime divisor <span class="math-container">$q$</span> of <span class="math-container">$2^n-1$</span>: <span class="math-container">$3$</span> must be congruent to some power of <span class="math-container">$2$</span> modulo <span class="math-container">$q$</span>. Therefore <span class="math-container">$q$</span> divides a number of the form <span class="math-container">$2^i-3$</span>. I don't know what the prime divisors of the sequence <span class="math-container">$2^i-3$</span> are, but a very weak corollary is this: either <span class="math-container">$3$</span> or <span class="math-container">$6$</span> is a quadratic residue mod <span class="math-container">$q$</span>, so, by toying with quadratic reciprocity a bit, we get <span class="math-container">$q \equiv \pm 1, \pm 5, \pm 13\pmod{24}$</span>. So when <span class="math-container">$n$</span> is prime, the prime divisors of <span class="math-container">$2^n-1$</span> must be of this specific form (note that this is a very weak corollary).</p>
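<p>For what it's worth, the nonexistence of such $n$ is easy to test empirically for small exponents; a quick Python scan (my own sanity check, not part of the argument above):</p>

```python
# scan small n > 1 for cases where 2^n - 1 divides 3^n - 1
hits = [n for n in range(2, 80) if (3**n - 1) % (2**n - 1) == 0]
print(hits)
```

<p>The scan comes up empty for this range.</p>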
|
4,549,781 | <p>Suppose we have a convex continuous function from <span class="math-container">$\mathbb{R}$</span> to <span class="math-container">$\mathbb{R}^+$</span>,
<span class="math-container">$f:\mathbb{R}\to\mathbb{R}^+$</span>, such that<br />
<span class="math-container">$f(-x)=f(x),\ \lim_{x\to 0}\frac{f(x)}{x}=0 \text{ and } \lim_{x\to \infty}\frac{f(x)}{x}=\infty$</span>, and<br />
define <span class="math-container">$f_{1}(x)=e^{-1/x^2}f(x)$</span>.<br />
My question is: can we define <span class="math-container">$f(x)$</span> such that<br />
<span class="math-container">$f_1(x) =$</span>
<span class="math-container">$\begin{cases}
0 & :x\in[-k,k],\text{where}\hspace{0.2cm} k\in \mathbb{R}\\
\text{it will increase } & : x\in(-\infty,-k)\cup(k,\infty)
\end{cases} $</span>
<br />
I hope I have written my question so that people can understand what I want to say, if there is any mistake please let me know.All I want is a function which is zero in a closed interval and then it increases and the final plot is like convex function,if I keep on changing the value of <span class="math-container">$k$</span> it should be zero in all those <span class="math-container">$[-k,k]$</span> for every <span class="math-container">$k\in \mathbb{R}$</span></p>
| Gregory | 197,701 | <p><a href="https://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Gershgorin_circle_theorem</a> may help you here.
Note that it implies that eigenvalues <span class="math-container">$\lambda$</span> satisfy
<span class="math-container">$$|\lambda - a_{ii}| \le \sum_{k\neq i} |a_{ki}| = \sum_{k\neq i}a_{ki} = - a_{ii}.$$</span>
This proves that all eigenvalues are non-positive. Not sure at the moment the best way to prove that one and only one eigenvalue will be <span class="math-container">$0$</span>.</p>
<p><strong>EDIT:</strong></p>
<p>However, it seems clear that at least one will be zero since the elements satisfy <span class="math-container">$\sum_{k=1}^N a_{ki} = 0$</span>. The determinant is clearly going to be <span class="math-container">$0$</span> since an entire row can be zeroed out. So at least one eigenvalue is <span class="math-container">$0$</span>.</p>
<p><strong>Second EDIT:</strong></p>
<p>As pointed out by user1551, the eigenvalues can be complex so the original simplification is incorrect. The result can be explicitly shown. I leave it to OP to verify that the real part of <span class="math-container">$\lambda$</span> is non-positive.</p>
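<p>A tiny concrete instance (an example matrix of my own) illustrates the pattern: with nonnegative off-diagonal entries and zero column sums, the determinant vanishes, so $0$ is an eigenvalue, and the other eigenvalue is negative.</p>

```python
# example matrix M = [[-2, 1], [2, -1]]: off-diagonal entries are >= 0
# and each column sums to 0, as in the setting above
tr = -2 + (-1)                    # trace of M
det = (-2) * (-1) - 1 * 2         # determinant of M = 0, so 0 is an eigenvalue
disc = tr * tr - 4 * det
lam1 = (tr + disc ** 0.5) / 2
lam2 = (tr - disc ** 0.5) / 2
print(lam1, lam2)  # -> 0.0 -3.0
```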
|
269,696 | <p>I am given that $\sum\limits_{n=1}^\infty a_n$ is convergent. </p>
<p>I need to determine whether $\sum\limits_{n=1}^\infty (a_n)^\frac{1}{3}\;$ and $\;\sum\limits_{n=1}^\infty (a_n)^2\;$ are also convergent.</p>
<p>Imagine that $a_n = \dfrac{1}{n^4}.\;$ I believe that this is convergent because it's converging to $0$.</p>
<p>Following the same thought, if $\displaystyle a_n = \left(\frac{1}{n^4}\right)^2,\;$ it's convergent because it's converging to $0$.</p>
<p>Am I doing this correctly or there is some other way to prove this?</p>
| Nameless | 28,087 | <p>You want to decide whether or not the statements are true for any sequence $(a_n)$. So you either prove them, or provide a counterexample.</p>
<p>What you have done is wrong on many counts. You took a sequence $a_n$ and said (I believe) $\sum_{n=1}^{\infty}a_n$ converges because $a_n\to 0$. This is wrong as the harmonic series $\sum_{n=1}^{\infty}\frac1n$ diverges although $\frac1n\to 0$. </p>
<p>For an easier counterexample take $a_n=\frac{1}{n^3}$ for the first problem. Indeed,
$\sum_{n=1}^{\infty}\frac1{n^3}$ converges as
$$0\le \frac1{n^3}\le \frac1{n^2}$$
and the second series converges (why?). You can also use the integral test or the Cauchy Condensation test. What is
$$\sum_{n=1}^{\infty}\left(\frac1{n^3}\right)^{\frac13}?$$</p>
<p>The second problem is trickier: You want $\sum_{n=1}^{\infty}a_n$ to converge but not $\sum_{n=1}^{\infty}a^2_n$. This creates a small problem: </p>
<p>Because $a_n\to 0$, for large $n$, $\left|a_n\right|\ge a^2_n$ (why?) which should mean that the
second series would converge. Not so much, if we choose $a_n$ to alternate signs. Then $\sum_{n=1}^{\infty}a_n$ would converge but not $\sum_{n=1}^{\infty}a^2_n$ as in the second series the terms would only be added, while in the first series
terms are also subtracted (alternating signs). An example is $a_n=\frac{(-1)^n}{\sqrt{n}}$</p>
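<p>The counterexample can also be seen numerically (a rough empirical illustration, not a proof): with $a_n=\frac{(-1)^n}{\sqrt{n}}$, distant partial sums of $\sum a_n$ stay close together, while the partial sums of $\sum a_n^2=\sum\frac1n$ keep growing like $\ln N$:</p>

```python
import math

def partial(f, N):
    return sum(f(n) for n in range(1, N + 1))

a = lambda n: (-1) ** n / math.sqrt(n)

# alternating series: distant partial sums stay close
p1, p2 = partial(a, 10_000), partial(a, 20_000)

# squared series is the harmonic series: it grows without bound
s1, s2 = partial(lambda n: a(n) ** 2, 10_000), partial(lambda n: a(n) ** 2, 100_000)
print(abs(p2 - p1) < 0.02, s2 - s1 > 2)  # -> True True
```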
|
2,649,557 | <p>I've been studying topology recently, and I've gotten to the part of the book that deals with quotient spaces. For the most part, it's fairly clear, but one thing that has been confusing me a bit is how the unit circle is represented.</p>
<p>Sometimes $\mathbf S^1$ is denoted as $\{(x,y)\in\mathbb R^2|(x-a)^2+(y-b)^2=1\}$ while other times it's denoted as $\{z\in\mathbb C| \; |z-w|=1\}$. I know these are both representations of the same thing, but I'm not sure whether to consider $\mathbf S^1$ as a subset of $\mathbb R^2$ or as a subset of $\mathbb C$, or if it even matters.</p>
| user284331 | 284,331 | <p>$z$ as a complex number has the form $z=x+yi$ for real numbers $x,y$, and $z$ is identified with the point $(x,y)$ in ${\bf{R}}^{2}$. The topology on ${\bf{C}}$ is the Euclidean topology of ${\bf{R}}^{2}$. So if $w=(a,b)$, then $|z-w|^{2}=(x-a)^{2}+(y-b)^{2}$, and hence $|z-w|=1$ if and only if $(x-a)^{2}+(y-b)^{2}=1$.</p>
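<p>A throwaway numerical sanity check of this identification in Python (illustrative only): the complex modulus agrees with the Euclidean distance between the corresponding points.</p>

```python
import math

z, w = complex(3, 4), complex(1, 1)
# |z - w| in C vs the Euclidean distance between (3,4) and (1,1) in R^2
print(math.isclose(abs(z - w), math.hypot(3 - 1, 4 - 1)))  # -> True
```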
|
1,649,997 | <p>I'm having a hard time coming up with two unbounded sequences where their difference yields $0$ when $n\rightarrow\infty$. Any ideas?</p>
| cpiegore | 268,070 | <p>Take $a_n = f(n)$ where $f$ is a monotone unbounded function (for instance $f(n)=n$), and let $b_n = f(n) + \frac{1}{n^m}$ for any $m > 0$. Both sequences are unbounded, yet $b_n - a_n = \frac{1}{n^m} \to 0$.</p>
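<p>For a concrete instance of this recipe, take $f(n)=n$ and $m=1$; a quick numerical look (illustrative only):</p>

```python
a = lambda n: n              # unbounded
b = lambda n: n + 1 / n      # also unbounded (here f(n) = n and m = 1)

# the differences shrink toward 0 even though both sequences blow up
diffs = [b(n) - a(n) for n in (10, 1000, 100000)]
print(diffs)
```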
|
2,468,781 | <p>The law of Universal Generalization states that: </p>
<p>P(c)</p>
<p>(x) P(x) </p>
<p>Now, I understand that this works only if c is <em>any random element</em> from the universe. Such arbitrary selection makes this rule mathematically valid. However, I do not understand how it holds true in practical examples. </p>
<p>For instance, if I randomly pick out a number from the set of the integers 1 to 10 and it turns out to be a prime number, I can infer using Universal Generalization that all the numbers in the set are prime. But this would be a fallacious conclusion. How then, can the law be used in practice? </p>
| 5xum | 112,884 | <p>It's not "I pick a random $c$ and if it's true for $c$, then it's true for all $x$"</p>
<p>It's "If I know it's true for $c$ even if I don't know which $c$ I have, then it's true for all $x$".</p>
<hr>
<p>In other words, your example should be:</p>
<blockquote>
<p>If you tell me you will give me a random number from the set of integers, and I can already be certain that the number will be a prime number, then I can infer that all numbers of the set are prime.</p>
</blockquote>
<p>And this, of course, is not true, since if all you know is that you will get an integer smaller than $100$, you can't conclude you will get a prime.</p>
|
126,897 | <p>It is an exercise in a book on discrete mathematics. How can one prove that in the decimal expansion of the quotient of two integers, some block of digits eventually repeats?
For example:
$\frac { 1 }{ 6 } =0.166\dot { 6 } \ldots$ and $\frac { 217 }{ 660 } =0.328787\dot { 8 } \dot { 7 } \ldots$</p>
<p>How should I think about this? I just can't see where to use the Pigeonhole Principle.
Thanks for your help!</p>
| Raymond Manzoni | 21,783 | <p>Let's proceed to the actual division :</p>
<p>$
\begin{array} {r|l}
\boxed{217}\hphantom{000\;} & 660\\
\hline
2170\hphantom{000} & 0.3287\\
-1980\hphantom{000} & \\
\boxed{190}\hphantom{00\;} & \\
1900\hphantom{00} & \\
-1320\hphantom{00} & \\
\boxed{580}\hphantom{0\;} & \\
5800\hphantom{0} & \\
-5280\hphantom{0} & \\
\boxed{520}\hphantom{\;} & \\
5200 & \\
-4620 & \\
\boxed{580} & \\
\end{array}
$</p>
<p>The important point is that each remainder must be smaller than the divisor $660$, so that after a finite number of steps you must get $0$ or a remainder you got before (the Pigeonhole Principle).<br>
What will the next digit of the quotient be? And the next remainder?</p>
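<p>The pigeonhole argument can be turned into a short program (a sketch with my own function name): do the long division while recording remainders, and stop as soon as a remainder repeats; the digits produced since its first occurrence form the repeating block.</p>

```python
def decimal_digits(p, q):
    """Long-divide p/q (0 < p < q): return (digits, repeating_block)."""
    digits, seen, r = [], {}, p % q
    while r != 0 and r not in seen:
        seen[r] = len(digits)        # remember where this remainder occurred
        r *= 10
        digits.append(r // q)
        r %= q
    if r == 0:                       # terminating expansion
        return digits, None
    return digits, digits[seen[r]:]  # block since first occurrence of r

print(decimal_digits(217, 660))  # -> ([3, 2, 8, 7], [8, 7])
```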
<p>Hoping it clarified,</p>
|
294,895 | <p>Let $T: X \longrightarrow Y$ be a continuous linear map between two Banach spaces.</p>
<p>When is $\operatorname{Ran}(T)$ a closed subspace?</p>
<p>What theorems are there? </p>
<p>Thanks :) </p>
| Davide Giraudo | 9,849 | <p>Consider $X=Y:=\ell^\infty$, the normed space of bounded sequences. Let $T(x)(n):=\frac{x(n)}{n^2}$; this gives a linear continuous operator. Let $y\in T(X)$, then $\{n^2y(n)\}$ is bounded, and the converse holds. So
$$T(X)=\{y,\sup_n|n^2y(n)|<\infty\}.$$
Let $x^{(n)}:=\sum_{j=1}^nj^{-1}e(j)$; it converges in $\ell^\infty$ to the sequence $x=\{n^{-1}\}$. Furthermore, $x^{(n)}\in T(X)$. But $x\notin T(X)$.</p>
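<p>The two halves of the counterexample can be checked numerically (approximating the sup norm over a finite range; the helper below is my own): $\|x^{(n)}-x\|_\infty = \frac{1}{n+1}\to 0$, while the limit $x=\{n^{-1}\}$ satisfies $n^2x(n)=n$, which is unbounded, so $x\notin T(X)$.</p>

```python
def sup_dist(n, N=10_000):
    """Sup-norm distance between x^(n) and x = (1, 1/2, 1/3, ...), truncated at N."""
    return max(abs((1 / j if j <= n else 0.0) - 1 / j) for j in range(1, N + 1))

print(sup_dist(99))  # -> 0.01, i.e. 1/(n+1) with n = 99
```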
|
385,404 | <p>So, I have this question which is still troubling me:</p>
<blockquote>
<p>Find the value of $k$ such that the equation $2x^3 + 3x^2 + kx - 48 = 0$ has two solutions equal in value but opposite in sign.</p>
</blockquote>
<p>I've had numerous attempts at this, such as using simultaneous equations and the factor theorem, but there always seems to be a problem. I'm sure I'm missing an important step here. Any clearing up would be great, thanks!</p>
| Easy | 60,079 | <p>Let $a$ be one of the pair of roots. Then $$2a^3 + 3a^2 + ka - 48 =0=2(-a)^3 + 3(-a)^2 + k(-a) - 48\Rightarrow(2a^2+k)a=0\Rightarrow k=-2a^2,$$ since $a \neq 0$ (a root $a=0$ would force $-48=0$).
Substituting this back into the equation one gets $$2a^3+3a^2+(-2a^2)a-48=0,$$ which gives $a^2=16\Rightarrow k=-2a^2=-32$.</p>
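<p>With $k=-32$ the cubic factors as $(x-4)(x+4)(2x+3)$, which a quick check confirms (the two opposite roots are $\pm 4$):</p>

```python
p = lambda x: 2 * x**3 + 3 * x**2 - 32 * x - 48   # the cubic with k = -32
print(p(4), p(-4), p(-1.5))  # -> 0 0 0.0
```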
|
375,910 | <p>Okay so basically I want to know if you can solve this log equation without the use of u substitution: </p>
<p>$${\log_4{\log_3{x}}} = 1$$</p>
<p>I believe that u substitution is the only way to solve this problem, but please prove me wrong if theres another way to do so.</p>
| iostream007 | 76,954 | <p>There is a basic identity of logarithms: $\log_ab=n\iff a^n=b$.</p>
<p>So in your question, $\log_4\log_3x=1$, we put $\log_3x=u$:</p>
<p>$$\log_4u=1\implies u=4^1\implies u=4\implies\log_3x=4\implies x=3^4\implies x=81$$ </p>
|
384,471 | <p>I'm attempting to evaluate the limit</p>
<p>$\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}$</p>
<p>I got it reduced to the following</p>
<p>$\lim_{x\rightarrow\infty}\frac{\sqrt{\frac{1}{\left(x-2\right)^{2}}-\frac{3}{\left(x-2\right)^{4}}}+1}{1-\frac{3}{\left(x-2\right)^{2}}-1}$</p>
<p>But putting in $\infty$ I get $\frac{1}{0}$ and, what's worse, Mathematica tells me the limit is equal to $-\infty$. Where am I going wrong?</p>
| lab bhattacharjee | 33,337 | <p>$$F=\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}$$</p>
<p>$$=\lim_{x\rightarrow\infty}\frac{1}{\sqrt{(x-2)^2-3}-x+2}$$</p>
<p>Let us put $x-2=\sqrt3\csc2\theta$</p>
<p>$x\to\infty\implies \theta\to0$</p>
<p>So, $$F=\lim_{\theta\to0}\frac1{\sqrt3\cot2\theta-(\sqrt3\csc2\theta+2)+2}$$</p>
<p>$$=-\frac1{\sqrt3}\lim_{\theta\to0}\frac{\sin2\theta}{1-\cos2\theta}$$</p>
<p>$$=-\frac1{\sqrt3}\lim_{\theta\to0}\frac{2\sin\theta\cos\theta}{2\sin^2\theta}$$ (using $\sin2\theta=2\sin\theta\cos\theta,\cos2\theta=1-2\sin^2\theta$)</p>
<p>$$=-\frac1{\sqrt3}\lim_{\theta\to0}\cot\theta$$ as $\theta\to0\implies \sin\theta\to0\implies \sin\theta\ne0$</p>
<p>What is $\cot0?$</p>
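<p>A quick numerical look (my own sanity check) agrees with Mathematica's $-\infty$: the denominator tends to $0$ <em>from below</em>, roughly like $-\frac{3}{2x}$, so the quotient behaves like $-\frac{2x}{3}$ and blows up negatively.</p>

```python
import math

f = lambda x: 1 / (math.sqrt(x * x - 4 * x + 1) - x + 2)
# the denominator behaves like -3/(2x) for large x, so f(x) -> -infinity
print([f(10 ** k) for k in (2, 4, 6)])
```

<p>The printed values are increasingly negative, consistent with the limit $-\infty$.</p>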
|
176,054 | <p>I have a friend who wants to study something applied to neurosciences. He is going to begin his grad studies in mathematics.
He asked me which areas of mathematics could be applied to neurosciences.
Since I don't know the answer, I thought mathoverflow would be the right place to ask.
I mean, there are many areas of mathematics that could be applied to neurosciences. But the question is the following: which are the fields that have already been applied to neurosciences? Are there areas related to dynamical systems, stochastic process, probability, topology, analysis, PDE or algebra applied to neurosciences?
Articles are welcome. </p>
<p>Thank you in advance</p>
| André Henriques | 5,690 | <p>My wife is a neuroscientist. I can tell you what she uses:</p>
<p>$\bullet$ a LOT of statistics.<br>
$\bullet$ signal processing (such as wavelet transform).<br>
$\bullet$ some Bayesian probability theory.</p>
|
6,661 | <p>Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.</p>
<p>I searched the site and closest to an answer was <a href="https://math.stackexchange.com/questions/954/inverse-of-laplace-transform">this</a>. However, it is too complicated for me.</p>
| Agustí Roig | 664 | <p>There are beautiful video lessons at <a href="http://ocw.mit.edu/index.htm">MIT Opencourseware</a>. I'm particularly in love with <a href="http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-19-introduction-to-the-laplace-transform/">this</a> presentation of the Laplace transform.</p>
|
6,661 | <p>Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.</p>
<p>I searched the site and closest to an answer was <a href="https://math.stackexchange.com/questions/954/inverse-of-laplace-transform">this</a>. However, it is too complicated for me.</p>
| Ron Gordon | 53,268 | <p>A Laplace transform is useful for turning (constant coefficient) ordinary differential equations into algebraic equations, and partial differential equations into ordinary differential equations (though I rarely see these daisy chained together). </p>
<p>Let's say that you have an ordinary DE of the form</p>
<p>$$a y''(t) + b y'(t) + c y(t) = f(t) \quad t \gt 0$$
$$y(0)=y_0$$
$$y'(0)=p_0$$</p>
<p>Then the above equation becomes</p>
<p>$$(a s^2+b s+c) \hat{y}(s) - [a y_0 s + (a p_0 + b y_0)] = \hat{f}(s)$$</p>
<p>where $\hat{y}$ and $\hat{f}$ are Laplace transforms of $y$ and $f$, respectively. Note that we have converted the ODE into an algebraic equation in which we solve for $\hat{y}(s)$. We find $y(t)$ by inverse Laplace transformation, which is usually accomplished through tables, or contour integration if there is facility with that. Note that the initial conditions are built right into the equation we solve.</p>
<p>There are numerous examples for using Laplace transforms in PDE's. <a href="https://math.stackexchange.com/questions/480235/inverse-laplace-transform-of-bar-p-d-frack-0-sqrts-r-dsk-0-sqrts/481946#481946">Here is a case</a> I did in which I used LT's to solve the heat equation in two dimensions.</p>
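<p>As an illustrative sketch in Python with SymPy (the specific ODE $y''+3y'+2y=0$, $y(0)=1$, $y'(0)=0$ is my own example, not from the question): the algebraic step above gives $\hat{y}(s)=\frac{s+3}{s^2+3s+2}$, and inverting recovers $y(t)$.</p>

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# a=1, b=3, c=2, y0=1, p0=0 gives (s**2 + 3*s + 2)*Y - (s + 3) = 0, so:
Y = (s + 3) / (s**2 + 3*s + 2)

# invert the transform to return to the time domain
y = sp.simplify(sp.inverse_laplace_transform(Y, s, t))
print(y)  # equals 2*exp(-t) - exp(-2*t) for t > 0
```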
|
2,292,711 | <p>Show that the vector field $F(x,y)=(2x-x^5-xy^4,y-y^3-x^2y)$ defined in $R^2$ does not have periodic orbits; the Bendixson criterion is not useful.</p>
| Robert Israel | 8,508 | <p>Since the $x$ and $y$ axes are invariant, any periodic orbit must be in one of the four quadrants. A periodic orbit must have a stationary point in its interior, but every stationary point is on an axis.</p>
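<p>A quick check of the stationary-point claim (a Python/SymPy sketch, assuming SymPy is available; note $F_1=x(2-x^4-y^4)$ and $F_2=y(1-y^2-x^2)$):</p>

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F1 = 2*x - x**5 - x*y**4   # factors as x*(2 - x**4 - y**4)
F2 = y - y**3 - x**2*y     # factors as y*(1 - y**2 - x**2)

# keep only the real stationary points
crit = [(a, b) for a, b in sp.solve([F1, F2], [x, y])
        if a.is_real and b.is_real]
print(crit)  # every real stationary point has x == 0 or y == 0
```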
|
3,338,597 | <p>I am struggling to find the cases for which <span class="math-container">$I(X;Y|Z)>I(X;Y)$</span>. The only mathematical example I could find for such a case is the following:
<span class="math-container">$$
I(X;Y) + I(X;Z|Y) = I(X;Z) + I(X;Y|Z).
$$</span>
This makes sense since they are both definitions of <span class="math-container">$I(X;Y,Z)$</span>. So, if we assume <span class="math-container">$X$</span> and <span class="math-container">$Z$</span> to be independent such that <span class="math-container">$I(X;Z) = 0$</span>, then
<span class="math-container">$$
I(X;Y|Z) - I(X;Y) = I(X;Z|Y) \geq0
$$</span>
such that
<span class="math-container">$$
I(X;Y|Z) \geq I(X;Y).
$$</span>
The issue I have with this example is that if we considered <span class="math-container">$X$</span> and <span class="math-container">$Z$</span> to be independent, I also would expect <span class="math-container">$I(X;Z|Y)$</span> to be equal to <span class="math-container">$0$</span> and not greater than <span class="math-container">$0$</span>. If it was <span class="math-container">$0$</span> then the MI and CMI would be equal which I can understand, but I do not get how this can be achieved and how to interpret it properly. In other words, how can conditioning a third random variable increase the mutual information between two other random variables mathematically and how can this be interpreted?</p>
| leonbloy | 312 | <p>The classical example is : let <span class="math-container">$X,Y$</span> be independent fair Bernoulli variables (take values <span class="math-container">$\{0,1\}$</span> with equal probability), and let <span class="math-container">$Z=X+Y \pmod 2$</span> (in boolean logic: <span class="math-container">$Z= X \oplus Y$</span> where <span class="math-container">$\oplus$</span> is an XOR operator).</p>
<p>It's easy to see that all <span class="math-container">$X,Y,Z$</span> have 1 bit of entropy, and that they are all pairwise independent (<span class="math-container">$P(Z|X)=P(Z)$</span>) hence, <span class="math-container">$I(X;Y)=I(X;Z)=0$</span></p>
<p>However, it's also obvious that knowing two variables let you know the other, so for example <span class="math-container">$H(X | Y,Z)=0$</span> and</p>
<p><span class="math-container">$$I(X;Y|Z) = H(X|Z) - H(X | Y,Z)=1 - 0 = 1 $$</span></p>
<p>The way to understand <span class="math-container">$I(X;Y|Z) > I(X;Y)$</span>, in this example, is: <span class="math-container">$Y$</span>, by itself, does not give us any information gain about <span class="math-container">$X$</span>. However, if we are given <span class="math-container">$Z$</span> (condition!), things change: <span class="math-container">$Y$</span> now gives us a lot of information.</p>
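<p>This example is easy to verify numerically; a small Python sketch computes every quantity straight from the joint distribution of <span class="math-container">$(X,Y,Z)$</span>:</p>

```python
from itertools import product
from math import log2

# joint distribution: X, Y fair and independent, Z = X XOR Y
p = {}
for xv, yv in product((0, 1), repeat=2):
    p[(xv, yv, xv ^ yv)] = 0.25

def H(idx):
    """Entropy (in bits) of the marginal on the coordinates in idx."""
    marg = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * log2(q) for q in marg.values() if q > 0)

I_XY = H([0]) + H([1]) - H([0, 1])                            # I(X;Y) = 0
I_XY_given_Z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])  # I(X;Y|Z) = 1
print(I_XY, I_XY_given_Z)
```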
|
3,338,597 | <p>I am struggling to find the cases for which <span class="math-container">$I(X;Y|Z)>I(X;Y)$</span>. The only mathematical example I could find for such a case is the following:
<span class="math-container">$$
I(X;Y) + I(X;Z|Y) = I(X;Z) + I(X;Y|Z).
$$</span>
This makes sense since they are both definitions of <span class="math-container">$I(X;Y,Z)$</span>. So, if we assume <span class="math-container">$X$</span> and <span class="math-container">$Z$</span> to be independent such that <span class="math-container">$I(X;Z) = 0$</span>, then
<span class="math-container">$$
I(X;Y|Z) - I(X;Y) = I(X;Z|Y) \geq0
$$</span>
such that
<span class="math-container">$$
I(X;Y|Z) \geq I(X;Y).
$$</span>
The issue I have with this example is that if we considered <span class="math-container">$X$</span> and <span class="math-container">$Z$</span> to be independent, I also would expect <span class="math-container">$I(X;Z|Y)$</span> to be equal to <span class="math-container">$0$</span> and not greater than <span class="math-container">$0$</span>. If it was <span class="math-container">$0$</span> then the MI and CMI would be equal which I can understand, but I do not get how this can be achieved and how to interpret it properly. In other words, how can conditioning a third random variable increase the mutual information between two other random variables mathematically and how can this be interpreted?</p>
| Cesare | 388,998 | <p>In terms of interpretation, <span class="math-container">$I(X;Y|Z)>I(X;Y)$</span> is an indication that at least to some degree <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> convey synergistic information about <span class="math-container">$Z$</span> (even though a certain degree of redundancy could still be present). <span class="math-container">$I(X;Y)-I(X;Y|Z)$</span> can be decomposed as the difference <span class="math-container">$R-S$</span>, where <span class="math-container">$R$</span> denotes the redundant component and <span class="math-container">$S$</span> the synergistic one. Only when <span class="math-container">$R-S<0$</span> can you conclude that there has to be some amount of synergy in the way <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> encode <span class="math-container">$Z$</span>.</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| roy smith | 9,449 | <p>Euclid's <em>Elements</em>. I find it much more useful than Klein's books, but that may mean I misunderstand the question. Indeed, after many years of perusing them, I find Klein's "from an advanced standpoint" books more of a polemic than a useful text. Euclid, on the other hand, introduces many of the main ideas of modern mathematics.</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Fred Daniel Kline | 16,888 | <p><a href="http://rads.stackoverflow.com/amzn/click/0716750473" rel="nofollow">Mathematics The Science of Patterns</a> by Keith Devlin.</p>
|
269,769 | <p>Is the following true always for a matrix norm </p>
<p>$$\lVert AB\rVert \leqslant \lVert A\rVert \cdot \lVert B\rVert \text{ ?}$$</p>
<p>Related to this: given that $r$ is a positive constant and $H$ is symmetric positive definite, is the following true:</p>
<p>$$\lVert (rI - H)(rI + H)^{-1}\rVert < 1 $$</p>
<p>or</p>
<p>$(rI - H)(rI + H)^{-1}$ has spectral radius less than $1$ for certain?</p>
<p>Thank you.</p>
| Hagen von Eitzen | 39,174 | <p>An <a href="http://en.wikipedia.org/wiki/Matrix_norm" rel="nofollow">online encyclopedia</a> states that "some (but not all) matrix norms satisfy" this inequality, though some authors include this as part of the definition. So it depends.</p>
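<p>For a concrete counterexample (a minimal Python sketch; the max-entry norm $\lVert A\rVert = \max_{i,j}\lvert a_{ij}\rvert$, my choice here, is a norm on matrices but is not submultiplicative):</p>

```python
# max-entry norm on 2x2 matrices, represented as nested lists
A = [[1, 1], [1, 1]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def maxnorm(M):
    return max(abs(e) for row in M for e in row)

AB = matmul(A, A)
print(maxnorm(AB), maxnorm(A) * maxnorm(A))  # 2 > 1, so ||AB|| > ||A||*||A||
```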
|
1,858,218 | <p>I often come across students who are confused by the idea that the complex unit, $i$, is defined as $i^2 = -1$. Since we are using the complex numbers in an engineering course, we use the complex numbers to encode information about a sinusoid of a given frequency, where the argument is the phase of the sinusoid with respect to a reference and the magnitude is the amplitude. I've started to explain the need for the complex numbers by re-defining the complex unit as: $i^5 = i$, which helps to explain that the complex numbers inherently encode periodic information, hence why they were so useful for encoding sinusoids.</p>
<p>My question is this: conceptually this helps the students to better understand why we use the complex numbers (they tend to hate the fact that we use the term "imaginary number"). However, this isn't the original definition of $i$. Is there anything seriously wrong with approaching the concepts with my definition? Is there anything special about the fact that $i^2 = -1$ <em>that is not also present in my definition</em>?</p>
| Peter G. Chang | 339,525 | <p>Well, for one thing any one of $x=0,1,-1,i,-i$ satisfies $x^5 = x$, so $x^5 = x$ cannot be the definition of $i$ unless you append the condition that $k = 5$ is the smallest positive integer power greater than $1$ such that $x^k = x$.</p>
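<p>A quick check with SymPy (assuming it is available) makes this concrete:</p>

```python
import sympy as sp

x = sp.symbols('x')
# x**5 - x factors as x*(x - 1)*(x + 1)*(x**2 + 1)
roots = sp.solve(x**5 - x, x)
print(roots)  # 0, 1, -1, I and -I, in some order
```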
|
4,412,175 | <p>I wonder and tried to google it, but I am not sure what to google it, how to solve non linear equations where equations are equal between each other. I am able to write a specific algorithm for two equations but not dynamically for N equations. I will show the example of three (how my equations approximately looks like):</p>
<p><a href="https://i.stack.imgur.com/NbloI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NbloI.png" alt="enter image description here" /></a></p>
<p>C1, C2, C3, X are unknowns, but in the end I do not need to know the value of X.</p>
<p>It can be interpreted like this (last equation C1 + C2 + C3 = 1 is not included here):</p>
<p><a href="https://i.stack.imgur.com/OAJz8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OAJz8.png" alt="enter image description here" /></a></p>
<p>Please don't try to solve this; I am not sure whether these equations have solutions, since I just typed random coefficients. But this is what my equations look like, only with different coefficients. I tried to calculate it with only two unknowns and got a quadratic equation in the end, so with three unknowns there will be a cubic, and with N unknowns a polynomial equation of degree N. Also, the result does not have to be 100% accurate. I am not sure if that helps somehow or not.</p>
<p>I found on Google that an iterative method might help. I looked at a few iterative methods, but I am still not sure how to use them on this kind of problem. I also found that nonlinear equations can be linearized. Maybe that would be an option, but I am not sure how to do it here.</p>
| Glorious Nathalie | 948,761 | <p>From <span class="math-container">$C_1 + C_2 + C_3 = 1 $</span>, you get <span class="math-container">$C_3 = 1 - C_1 - C_2 $</span></p>
<p>Substitute that in your equations, you get</p>
<p><span class="math-container">$\dfrac{ 4 C_1 + 8 C_2 + 16 (1 - C_1 - C_2) }{2 C_1} = \dfrac{ -12 C_1 - 8 C_2 + 16} {2 C_1} = \dfrac{ - 6 C_1 - 4 C_2 + 8} {C_1} $</span></p>
<p>and</p>
<p><span class="math-container">$\dfrac{ 9 C_1 + 27 C_2 + 81(1 - C_1 - C_2) }{3 C_2} = \dfrac{ - 24 C_1 - 18 C_2 + 27 }{C_2} $</span></p>
<p>and</p>
<p><span class="math-container">$\dfrac{16 C_1 + 64 C_2 + 256 (1 -C_1 - C_2) }{4 C_3} = \dfrac{ -60 C_1 - 48 C_2 + 64 }{C_3} $</span></p>
<p>Since these expressions are equal as you have in your question, we can cross multiply to get</p>
<p><span class="math-container">$( - 6C_1 - 4C_2 +8 ) C_2 = (-24 C_1 - 18 C_2 + 27) C_1 \hspace{25pt}(1)$</span></p>
<p>and</p>
<p><span class="math-container">$(- 6 C_1 - 4 C_2+8) (1 -C_1 - C_2) = (-60 C_1 - 48 C_2 + 64 ) C_1 \hspace{25pt}(2)$</span></p>
<p>Equations (1) and (2) are two quadratic equations in <span class="math-container">$C_1 $</span> and <span class="math-container">$C_2$</span> and can be solved using the method outlined in the solution of <a href="https://math.stackexchange.com/questions/4345777/how-to-solve-system-of-equations-a-1x2-b-1xy-c-1y2-d-1x-e-1y-f-1/4347911#4347911">this problem</a></p>
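<p>From here a computer algebra system can finish the job; a Python/SymPy sketch (any solution that makes one of the denominators <span class="math-container">$C_1$</span>, <span class="math-container">$C_2$</span> or <span class="math-container">$C_3$</span> vanish must be discarded):</p>

```python
import sympy as sp

C1, C2 = sp.symbols('C1 C2')

# equations (1) and (2) from above
eq1 = sp.Eq((-6*C1 - 4*C2 + 8) * C2, (-24*C1 - 18*C2 + 27) * C1)
eq2 = sp.Eq((-6*C1 - 4*C2 + 8) * (1 - C1 - C2), (-60*C1 - 48*C2 + 64) * C1)

solutions = sp.solve([eq1, eq2], [C1, C2])
for c1, c2 in solutions:
    print(c1, c2, 'C3 =', sp.simplify(1 - c1 - c2))
```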
|
907,154 | <p>Renata walks down an escalator that moves up and counts <span class="math-container">$150$</span> steps. Her sister Fernanda climbs the same escalator and counts <span class="math-container">$75$</span> steps. If the speed of Renata (in steps per time unit) is three times the speed of Fernanda, determine how many steps are visible on the escalator at any time. </p>
<blockquote>
<p>The answer is 120 steps.</p>
</blockquote>
<p>This is my first post ever on stackexchange, hello!</p>
| Ashitaka | 171,300 | <p>Let ${ t }_{ 1 }$ and ${ t }_{ 2 }$ be the time that Renata and Fernanda, respectively, need to complete the trip. And $x$ the number of visible steps.</p>
<p>Considering Renata's trip and knowing that ${ v }_{ r }=3{ v }_{ f }$</p>
<p>$$3v_{ f }-v_{ e }=\frac { x }{ t_{ 1 } } \\ 3v_{ f }=\frac { 150 }{ { t }_{ 1 } }$$</p>
<p>Dividing the first equation by the second, we find:</p>
<p>$$x=\frac { 50(3v_{ f }-v_{ e }) }{ v_{ f } } \qquad (I)$$</p>
<p>Now, considering Fernanda's trip:</p>
<p>$$v_{ f }+v_{ e }=\frac { x }{ { t }_{ 2 } } \\ v_{ f }=\frac { 75 }{ t_{ 2 } }$$</p>
<p>Dividing the first equation by the second, we find:</p>
<p>$$x=\frac { 75(v_{ f }+v_{ e }) }{ v_{ f } } \qquad (II)$$</p>
<p>And we have:</p>
<p>$$(I)=(II)\implies 50(3v_{ f }-v_{ e })=75(v_{ f }+v_{ e })\implies\frac { v_{ e } }{ v_{ f } } =\frac { 3 }{ 5 } \qquad (III)$$</p>
<p>Now, using (III) in (II):</p>
<p>$$x=75\left( 1+\frac { v_{ e } }{ v_{ f } } \right) =75\left( 1+\frac { 3 }{ 5 } \right) =120$$</p>
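<p>A quick numerical check of the result (a Python sketch with one concrete choice of speeds satisfying the derived ratio <span class="math-container">$v_e/v_f = 3/5$</span>):</p>

```python
v_f = 5.0    # Fernanda's stepping rate (steps per time unit)
v_e = 3.0    # escalator speed, chosen so that v_e / v_f = 3/5
x = 120.0    # claimed number of visible steps

t1 = x / (3 * v_f - v_e)   # Renata walks down at rate 3*v_f, against the escalator
t2 = x / (v_f + v_e)       # Fernanda climbs up at rate v_f, with the escalator
print(3 * v_f * t1, v_f * t2)  # steps counted: 150.0 75.0
```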
|
199,549 | <p>By using the Löwenheim–Skolem theorem & Mostowski collapse, in every model $V$ of $ZF+Con(ZF)$ there is a countable transitive set $M$ such that $(M,\in_M) \models ZF$. Is the following "converse" true?</p>
<blockquote>
<p>In every model $V$ of $ZF$ and every transitive set $M \in V$ such that $(M,\in_M) \models ZF$, there exists a transitive set $N \in V$ such that:</p>
<ol>
<li><p>$M \in N$</p></li>
<li><p>$(N,\in_N) \models ZF$</p></li>
<li><p>$M$ is countable inside $N$</p></li>
</ol>
</blockquote>
| Nate Ackerman | 8,106 | <p>The answer to your question is no. For example, suppose $\kappa$ is inaccessible and $M = V_{\kappa}$. Then $M$ is a model of $\sf ZF$ but $M$ cannot be countable in $N$ as it is not countable in $V$.</p>
<p>Of course there will always be a set generic extension of $V$ containing such an $N$ (just generically add a bijection from $M$ to omega)</p>
|
199,549 | <p>By using the Löwenheim–Skolem theorem & Mostowski collapse, in every model $V$ of $ZF+Con(ZF)$ there is a countable transitive set $M$ such that $(M,\in_M) \models ZF$. Is the following "converse" true?</p>
<blockquote>
<p>In every model $V$ of $ZF$ and every transitive set $M \in V$ such that $(M,\in_M) \models ZF$, there exists a transitive set $N \in V$ such that:</p>
<ol>
<li><p>$M \in N$</p></li>
<li><p>$(N,\in_N) \models ZF$</p></li>
<li><p>$M$ is countable inside $N$</p></li>
</ol>
</blockquote>
| Joel David Hamkins | 1,946 | <p>Let me point out that a close variant of the question has a
positive answer. Namely,</p>
<p><strong>Theorem.</strong> Every model of set theory $M\models\text{ZFC}$ has an
elementary extension $M\prec M^*$ such that there is a model
$N\models\text{ZFC}$ inside of which $\langle
M^*,\in^{M^*}\rangle$ is a countable transitive model.</p>
<p><strong>Proof.</strong> Fix any model $\langle
M,\in^M\rangle\models\text{ZFC}$. First, I claim that there is a
model $W\models\text{ZFC}$ with a cardinal $\delta$ such that
$$M\prec V_\delta^W\prec W.$$ To see this, consider the theory
$T$, in the language of set theory augmented with a constant
symbol $\delta$, which asserts the elementary diagram of $M$,
together with the assertions that $\delta$ is an ordinal and every
element of $M$ is in $V_\delta$ and also the scheme $V_\delta\prec
V$, asserting that $\delta$ is a <a href="http://cantorsattic.info/Reflecting">correct</a> cardinal. The
reflection theorem shows that every finite subset of this theory
is consistent, and so $T$ has a model $W$, which means that
$M\prec V_\delta^W\prec W$, as desired.</p>
<p>Second, we will go to a further extension in which (the image of)
$V_\delta^W$ becomes countable. Specifically, inside $W$ consider
the forcing that makes $\delta$ countable. Let $U\in W$ be an
ultrafilter on the corresponding complete Boolean algebra
$\mathbb{B}$, and let $j:W\to \check W_U\subset W^{\mathbb{B}}/U$
be the Boolean ultrapower embedding. This embedding and these
models exist inside $W$, without any actual forcing; I am just
taking a quotient by an ultrafilter in $V$. You can find further
explanation in my paper, <a href="http://jdh.hamkins.org/boolean-ultrapowers/">Well-founded Boolean ultrapowers as
large cardinal embeddings</a>.</p>
<p>Note that $j:V_\delta^W\to j(V_\delta^W)=[{\check V}_\delta^W]_U$ is an elementary
embedding, and furthermore, $j(V_\delta^W)$ is a countable
transitive set inside $W^{\mathbb{B}}/U$. So let
$M^*=j(V_\delta^W)$ and $N=W^{\mathbb{B}}/U$, and we have achieved
the statement of the theorem. <strong>QED</strong></p>
<p>The theorem applies to the case $M=V_\kappa$ in Nate's answer as
follows: the model $M^*$ will be $\omega$-nonstandard (as in
Asaf's answer), with $\kappa$ many natural numbers, and so the
former contradiction of Nate's answer does not engage. The
uncountable model $M$ is thought countable inside $N$, since $N$
has a lot of natural numbers.</p>
<p>One can actually improve the theorem to have
$N\models\text{ZFC}+V=L$, since every countable model is a
transitive model inside a model of $V=L$. This and many other
similar results are proved and discussed in my paper, <a href="http://jdh.hamkins.org/multiverse-perspective-on-constructibility/">A
multiverse perspective on the axiom of constructibility</a>.</p>
|
3,866,285 | <p>Suppose I'd like to find the coefficient of <span class="math-container">$x^{l}$</span> in the expansion of <span class="math-container">$(1+x+x^{2}+...+x^{n})^{m}$</span>, where <span class="math-container">$n$</span> and <span class="math-container">$m$</span> are given positive integers, for some given integer <span class="math-container">$l$</span> such that <span class="math-container">$n < l < mn$</span>. Is there a straightforward formula (in terms of some multinomial coefficient, or similar)?</p>
| user158293 | 158,293 | <p>For n=1 the answer is obvious, so assume n>1 and m>1. For sufficiently small m or n you could likely write a reasonable formula, but for general m,n I am not sure of a 'straightforward' or 'closed form' answer, depending upon exactly what one would classify as belonging to those two categories. One may, however, express the answer in the form:</p>
<p><span class="math-container">$\sum_{m_1=0}^{min(m,L)}\sum_{m_2=0}^{min(m,L)}\dots \sum_{m_{min(L,n)}=0}^{min(m,L)}C(m_1,m_2,\dots,m_{min(L,n)})\prod_{j=0}^{min(L,n)}f_{j}^{(m-m_j)}\;$</span></p>
<p>where <span class="math-container">$f_j$</span> is <span class="math-container">$\frac{d^j}{d\:x^j}\sum_{i=0}^{n}x^i\;$</span> , the superscripts <span class="math-container">$(m-m_j)$</span> are the powers or exponents
and the <span class="math-container">$C(m_1,m_2,\dots,m_{min(L,n)})$</span> are constants which result from differentiating <span class="math-container">$\left(\sum_{i=0}^{n}x^i\right)^m \;$</span> L times, where I have written the capital L to stand for your <span class="math-container">$l$</span>. Then we can set <span class="math-container">$x=0$</span> and divide by <span class="math-container">$L!$</span> to obtain the desired coefficient, which would then be</p>
<p><span class="math-container">$\frac1{L!}\left(\sum_{m_1=0}^{min(m,L)}\sum_{m_2=0}^{min(m,L)}\dots \sum_{m_{min(L,n)}=0}^{min(m,L)}C(m_1,m_2,\dots,m_{min(L,n)})\prod_{j=0}^{min(L,n)}({j!})^{(m-m_j)}\right)\;$</span></p>
<p>Finding and expressing the <span class="math-container">$C(m_1,m_2,\dots,m_{min(L,n)})$</span> coefficients may be quite unwieldy, however, and I have not done so as yet.<br />
So that the length of the formula does not grow in proportion to m or n, I instead solve the problem with a recursive routine (one that calls itself), given below. I have tried to write it in a kind of universal, hopefully simple enough language that you could easily transfer it to Fortran (my favorite), BASIC, or most other symbolic math programs. The final answer, that is the coefficient of <span class="math-container">$x^L$</span>, is cf, and I have named the recursive routine nicf(ni,nl,nm,nn),
with its 4 arguments in parentheses; I have written capital L for your <span class="math-container">$l$</span>.</p>
<p>nicf(ni,nl,nm,nn): for i=1 thru <span class="math-container">$min(\frac{nl}{nn},nm)$</span> do( if <span class="math-container">$nl\!-\!i\:nn\!<\!nm\!-\!i\!+\!1$</span> then <span class="math-container">$cf\!=\!cf\!+\!ni\binom{nm}{i}\binom{nm-i}{nl-i\:nn}$</span>
,if <span class="math-container">$nm\!-\!i>0$</span> then for j=2 thru <span class="math-container">$min(nn\!-\!1,nl\!-\!i\:nn)$</span> do nicf(<span class="math-container">$ni\binom{nm}{i},nl\!-\!i\:nn,nm\!-\!i,j$</span>) )</p>
<p>All quantities are integers, and all variables in the routine are local except the accumulated answer cf, which is global. Note that <span class="math-container">$ni\binom{nm}{i}...$</span> means <span class="math-container">$ni$</span> times <span class="math-container">$\binom{nm}{i}...\;$</span> and <span class="math-container">$\;i\:nn$</span> means <span class="math-container">$i$</span> times <span class="math-container">$nn$</span>; the perhaps small figure '1' and the <span class="math-container">$1$</span> both denote one. Where I have written
...do nicf(...), it means the same as call nicf(...), which may be more common in other languages. The <span class="math-container">$\binom{a}{b}$</span> means <span class="math-container">$\frac{a!}{b!(a\!-\!b)!}$</span>. Now, assuming you understand that routine, it is used as follows in a simple program.</p>
<p>if L<m+1 then cf=<span class="math-container">$\binom{m}{L}$</span> else cf=<span class="math-container">$0$</span>, for k=max(ceiling(<span class="math-container">$\frac{L}m$</span>),2) thru n do nicf(<span class="math-container">$1$</span>,L,m,k)</p>
<p>And the final answer will be cf = the coefficient of <span class="math-container">$x^L$</span>. I have tested it in the symbolic program Maxima for sufficiently large m,n,L to be reasonably sure it is correct. To write a better general formula it may be good to study partitions and perhaps generating functions, as in Knuth, 'The Art of Computer Programming', Vol. I: Fundamental Algorithms.</p>
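<p>For comparison (not part of the routine above): since <span class="math-container">$(1+x+\dots+x^n)^m=(1-x^{n+1})^m(1-x)^{-m}$</span>, the standard inclusion-exclusion expansion gives the coefficient of <span class="math-container">$x^L$</span> as <span class="math-container">$\sum_{j}(-1)^j\binom{m}{j}\binom{L-j(n+1)+m-1}{m-1}$</span>. A Python sketch cross-checks this against brute-force expansion:</p>

```python
from math import comb

def coeff(L, n, m):
    """Coefficient of x**L in (1 + x + ... + x**n)**m via inclusion-exclusion."""
    return sum((-1)**j * comb(m, j) * comb(L - j*(n + 1) + m - 1, m - 1)
               for j in range(min(m, L // (n + 1)) + 1))

def coeff_brute(L, n, m):
    """Same coefficient, by explicitly multiplying out the polynomial."""
    poly = [1]
    for _ in range(m):
        new = [0] * (len(poly) + n)
        for i, c in enumerate(poly):
            for k in range(n + 1):
                new[i + k] += c
        poly = new
    return poly[L] if L < len(poly) else 0

print(coeff(7, 3, 4), coeff_brute(7, 3, 4))  # both 40
```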
|
3,005,965 | <p>I was making use of polynomial long division in inverse Z transform and I got stuck in a brainfart in one stage of the polynomial long division.</p>
<p>I posted the original question into digital signal processing stack exchange, but nobody answered it so I thought about sharing the link to math stack exchange.</p>
<p>So here is the link to the my earlier post containing the question and my concerns</p>
<p><a href="https://dsp.stackexchange.com/questions/53426/inverse-z-transform-confused-about-polynomial-long-division-lti-and-causal">https://dsp.stackexchange.com/questions/53426/inverse-z-transform-confused-about-polynomial-long-division-lti-and-causal</a></p>
<p>if you don't want to look at the link then I can put the same photo here and show the direct question also</p>
<p><a href="https://i.stack.imgur.com/K5VYs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K5VYs.jpg" alt="enter image description here"></a></p>
<p>I don't know what I should do here: I have two terms (17/2−5z). Which one do I divide by (2z^2), and why?</p>
| Henno Brandsma | 4,280 | <p>The long division algorithm for dividing <span class="math-container">$a(z)$</span> by <span class="math-container">$b(z)$</span> is meant to find
the unique polynomials <span class="math-container">$q(z)$</span> and <span class="math-container">$r(z)$</span> such that</p>
<p><span class="math-container">$$a(x) = q(z)b(z) + r(z)$$</span> with the degree of <span class="math-container">$r(z)$</span> strictly less than the degree of <span class="math-container">$b(z)$</span>.</p>
<p>In your case after one step you already have this situation for <span class="math-container">$q(z)=2$</span> (constant) and <span class="math-container">$r(z) = 5z-4$</span> which has degree 1 < degree <span class="math-container">$b(z)$</span> which is <span class="math-container">$2$</span>. So you're done after one step.</p>
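<p>SymPy can carry out the division and confirm the stopping point (a sketch; the exact polynomials are only in the linked picture, so the dividend <span class="math-container">$4z^2+5z-4$</span> is reconstructed here from the quoted quotient <span class="math-container">$q(z)=2$</span> and remainder <span class="math-container">$r(z)=5z-4$</span>):</p>

```python
import sympy as sp

z = sp.symbols('z')
a = 4*z**2 + 5*z - 4    # dividend (reconstructed example)
b = 2*z**2              # divisor
q, r = sp.div(a, b, z)  # polynomial long division: a = q*b + r, deg(r) < deg(b)
print(q, r)             # 2 and 5*z - 4: remainder degree 1 < 2, so division stops
```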
|
3,915,141 | <p>Let <span class="math-container">$(G,.)$</span> be a group with <span class="math-container">$e$</span> as an identity element, let <span class="math-container">$x\in G$</span> with <span class="math-container">$|x|=5$</span>, and let <span class="math-container">$C_G(x)=\left\{g \in G: gx=xg\right\}$</span>. In the process of proving <span class="math-container">$C_G(x)=C_G(x^3)$</span>, I proved <span class="math-container">$C_G(x)=C_G(x^2)$</span> as follows:</p>
<p>Let <span class="math-container">$g \in C_G(x)$</span></p>
<p>So <span class="math-container">$gx=xg$</span></p>
<p>Now <span class="math-container">$gx^2=g(xx)=(gx)x=(xg)x=x(gx)=x(xg)=(xx)g=x^2g$</span></p>
<p>So all <span class="math-container">$g \in C_G(x)$</span> satisfy <span class="math-container">$gx^2=x^2g$</span></p>
<p>So can we conclude <span class="math-container">$$C_G(x)=C_G(x^2)$$</span></p>
| Nicky Hekster | 9,605 | <p>In general: if <span class="math-container">$x \in G$</span> and <span class="math-container">$o(x)=n$</span>, then for every non-zero integer <span class="math-container">$k$</span> with gcd<span class="math-container">$(k,n)=1$</span> it holds that <span class="math-container">$C_G(x)=C_G(x^k)$</span>.<br>
<strong>Proof.</strong> By Bézout's Theorem, there exist <span class="math-container">$a,b \in \mathbb{Z}$</span> with <span class="math-container">$ak+bn=1$</span>. Hence <span class="math-container">$x=x^{ak+bn}=x^{ak}x^{bn}=(x^k)^a \cdot 1=(x^k)^a$</span>, so any element centralizing <span class="math-container">$x^k$</span> centralizes <span class="math-container">$x$</span>, and the reverse is obvious.</p>
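<p>As a concrete brute-force sanity check (a small Python sketch with <span class="math-container">$G=S_5$</span> and a <span class="math-container">$5$</span>-cycle <span class="math-container">$x$</span>, so <span class="math-container">$o(x)=5$</span> and <span class="math-container">$\gcd(3,5)=1$</span>):</p>

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]], with permutations stored as tuples."""
    return tuple(p[i] for i in q)

G = list(permutations(range(5)))   # the symmetric group S_5
x = (1, 2, 3, 4, 0)                # the 5-cycle 0->1->2->3->4->0
x3 = compose(x, compose(x, x))     # x cubed

def centralizer(g):
    return {h for h in G if compose(h, g) == compose(g, h)}

print(centralizer(x) == centralizer(x3))  # True
```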
|
30,220 | <p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p>
<p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> "Clarifying the nature of the infinite: the development of metamathematics and proof theory".</p>
<blockquote>
<p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual
reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>),
as part and parcel of what he refers to as the “second birth” of mathematics.
The following quote, from Dedekind, makes the difference of opinion very clear:</p>
</blockquote>
<blockquote>
<blockquote>
<p>A theory based upon calculation would, as it seems to me, not offer
the highest degree of perfection; it is preferable, as in the modern
theory of functions, to seek to draw the demonstrations no longer
from calculations, but directly from the characteristic fundamental
concepts, and to construct the theory in such a way that it will, on
the contrary, be in a position to predict the results of the calculation
(for example, the decomposable forms of a degree).</p>
</blockquote>
</blockquote>
<blockquote>
<p>In other words, from the Cantor-Dedekind point of view, abstract conceptual
investigation is to be preferred over calculation.</p>
</blockquote>
<p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here "calculation" means any type of routine technicality.) Category theory and topoi may provide some examples.</p>
<p>Thanks in advance.</p>
| Andrey Rekalo | 5,371 | <p><a href="https://en.wikipedia.org/wiki/David_Hilbert#The_finiteness_theorem" rel="nofollow noreferrer">Hilbert's finiteness theorem</a>, which arguably "killed classical invariant theory" and resulted in the creation of abstract algebra.</p>
<blockquote>
<p>Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. The attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated <a href="https://en.wikipedia.org/wiki/Hilbert%27s_basis_theorem" rel="nofollow noreferrer">Hilbert's basis theorem</a>: showing the existence of a finite set of generators, for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof — it did not display "an object" — but rather, it was an existence proof and relied on use of the Law of Excluded Middle in an infinite extension.</p>
</blockquote>
|
30,220 | <p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p>
<p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> "Clarifying the nature of the infinite: the development of metamathematics and proof theory".</p>
<blockquote>
<p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual
reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>),
as part and parcel of what he refers to as the “second birth” of mathematics.
The following quote, from Dedekind, makes the difference of opinion very clear:</p>
</blockquote>
<blockquote>
<blockquote>
<p>A theory based upon calculation would, as it seems to me, not offer
the highest degree of perfection; it is preferable, as in the modern
theory of functions, to seek to draw the demonstrations no longer
from calculations, but directly from the characteristic fundamental
concepts, and to construct the theory in such a way that it will, on
the contrary, be in a position to predict the results of the calculation
(for example, the decomposable forms of a degree).</p>
</blockquote>
</blockquote>
<blockquote>
<p>In other words, from the Cantor-Dedekind point of view, abstract conceptual
investigation is to be preferred over calculation.</p>
</blockquote>
<p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here "calculation" means any type of routine technicality.) Category theory and topoi may provide some examples.</p>
<p>Thanks in advance.</p>
| Joel David Hamkins | 1,946 | <p>Arguments by mathematical induction seem to provide an entire class of examples of the phenomenon, where computation is replaced by a higher level of reasoning.</p>
<p>With induction, one uses a comparatively abstract understanding of how a property propagates from smaller instances to larger instances, in order to arrive at a fuller understanding of the property in particular cases, without need for explicit calculation. Thus, one can see that a particular finite graph or group or whatever kind of structure has a property, not by calculating it in that instance, but by an abstract inductive argument, on size or degree or rank or whatever. A complex graph-theoretic calculation is avoided by understanding what happens in general when a point is deleted.</p>
<p>And there are, of course, extremely concrete elementary instances. We all know, for example, how to use
induction to prove that <span class="math-container">$1+2+\cdots+n=n(n+1)/2$</span>. Thus, the
comparatively abstract inductive argument predicts definite
values for concrete sums <span class="math-container">$1+2+\cdots+105$</span>. Similarly, we often understand the iterates of a function <span class="math-container">$f^n(x)$</span> without calculating them, or the powers of a matrix <span class="math-container">$A^n$</span>, or the successive derivatives of a function, all without calculation, by understanding the inductive relationship in effect at each step.</p>
<p>Surely mathematics is covered with dozens or hundreds of similar examples, of every degree of complexity and every level of abstraction.</p>
|
30,220 | <p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p>
<p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> "Clarifying the nature of the infinite: the development of metamathematics and proof theory".</p>
<blockquote>
<p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual
reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>),
as part and parcel of what he refers to as the “second birth” of mathematics.
The following quote, from Dedekind, makes the difference of opinion very clear:</p>
</blockquote>
<blockquote>
<blockquote>
<p>A theory based upon calculation would, as it seems to me, not offer
the highest degree of perfection; it is preferable, as in the modern
theory of functions, to seek to draw the demonstrations no longer
from calculations, but directly from the characteristic fundamental
concepts, and to construct the theory in such a way that it will, on
the contrary, be in a position to predict the results of the calculation
(for example, the decomposable forms of a degree).</p>
</blockquote>
</blockquote>
<blockquote>
<p>In other words, from the Cantor-Dedekind point of view, abstract conceptual
investigation is to be preferred over calculation.</p>
</blockquote>
<p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here "calculation" means any type of routine technicality.) Category theory and topoi may provide some examples.</p>
<p>Thanks in advance.</p>
| John Stillwell | 1,587 | <p>An example of a slightly different kind -- not eliminating all calculation, but
showing that "all calculations are easy" -- is <a href="https://en.wikipedia.org/wiki/Small_cancellation_theory" rel="nofollow noreferrer">Dehn's algorithm</a> in
combinatorial group theory. Dehn showed, using the combinatorics of
hyperbolic tessellations, that the word problem for surface groups is solvable
using only obvious word reductions.</p>
<p>In this case, calculation is avoided not so much by abstraction, but by
thinking geometrically rather than algebraically.</p>
|
260,037 | <p>Problem: </p>
<p>How many triangles do $m$ lines form if </p>
<p>a) Every two lines intersect and no three lines intersect at one point.</p>
<p>b) There are $n$ lines among $m$ lines that are parallel to each other. No other line is parallel to these $n$ lines, and no other two lines are parallel to each other. Again no three lines intersect at one point.</p>
<p>Thank you. </p>
| ashley | 50,188 | <p>Take one of these lines. Every pair of lines intersecting it forms a triangle with it -- $\binom{m-1}{2}$ triangles in all. </p>
<p>Do this for every line, excluding the ones you already worked on. So the second line you look at contributes $\binom{m-2}{2}$ additional triangles, formed by itself and the remaining lines intersecting it, and so on. </p>
<p>All add up to $\sum_{i=2}^{m-1}i(i-1)/2 = \binom{m}{3}$. </p>
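A quick brute-force check of this count (a sketch of my own: by the hockey-stick identity the sum telescopes to $\binom{m}{3}$, the standard answer for $m$ lines in general position):

```python
from math import comb

def triangles(m):
    # sum_{i=2}^{m-1} i(i-1)/2, i.e. C(2,2) + C(3,2) + ... + C(m-1,2):
    # each triangle is counted once, at its lowest-numbered line.
    return sum(i * (i - 1) // 2 for i in range(2, m))

# The telescoped closed form agrees for every small m.
for m in range(3, 30):
    assert triangles(m) == comb(m, 3)
```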
|
260,037 | <p>Problem: </p>
<p>How many triangles do $m$ lines form if </p>
<p>a) Every two lines intersect and no three lines intersect at one point.</p>
<p>b) There are $n$ lines among $m$ lines that are parallel to each other. No other line is parallel to these $n$ lines, and no other two lines are parallel to each other. Again no three lines intersect at one point.</p>
<p>Thank you. </p>
| Ross Millikan | 1,827 | <p>There are $n$ parallel lines. Each pair of the $m-n$ remaining lines forms a triangle with each of the $n$, giving ${m-n \choose 2}n=\frac 12(m-n)(m-n-1)n$ such triangles. The $m-n$ mutually non-parallel lines also form ${m-n \choose 3}$ triangles among themselves, for a total of ${m-n \choose 2}n+{m-n \choose 3}$.</p>
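A brute-force check of case b) (my own sketch, assuming general position apart from the $n$ parallel lines): a triple of lines bounds a triangle exactly when no two of the three are parallel, which gives $\binom{m-n}{2}n$ triangles using one parallel line plus $\binom{m-n}{3}$ using none.

```python
from itertools import combinations
from math import comb

def count_triangles(m, n):
    # Model each line by its slope: the n parallel lines share slope 0,
    # the other m - n lines get pairwise distinct nonzero slopes.
    # A triple of lines forms a triangle iff its three slopes are distinct.
    slopes = [0] * n + list(range(1, m - n + 1))
    return sum(1 for t in combinations(slopes, 3) if len(set(t)) == 3)

for m in range(3, 9):
    for n in range(m + 1):
        assert count_triangles(m, n) == comb(m - n, 2) * n + comb(m - n, 3)
```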
|
1,414,316 | <p>I am trying to optimize distance from point to plane using Lagrange multiplier.</p>
<p>Usually for such problems you are given specific point like (1,2,3) in 3D, and then an exact plane which is just the subject of Lagrange. But what I have here doesn't specify values for point and plane.</p>
<p>It says the problem happens in a D-dimensional space. It denotes the point as X, the plane as (wT)x+b=0, where wT is the transpose of w (which is just an n-by-1 matrix, I suppose), and requires me to use Lagrange to optimize the distance. The final result should be expressed with w, b and X. Without specific values I am really lost as to how to approach this kind of problem. Any suggestions?</p>
| Miha Habič | 9,440 | <p>To complement Asaf's answer, let me point out that you don't need any large cardinals at all to get embeddings like this; they exist as long as there is an uncountable transitive model of ZFC+"there are no inaccessible cardinals".</p>
<p>If $N$ is such an uncountable model we can get an embedding by taking a countable elementary substructure $X\prec N$ and collapsing it to a transitive $M$. The inverse collapse map gives us the embedding $j\colon M\to N$.
Since the critical point $\kappa$ has the same properties in $M$ as $j(\kappa)$ has in $N$ and $N$ had no inaccessibles, $\kappa$ cannot be inaccessible in $M$.</p>
<p>It is less clear to me what happens if there are no uncountable transitive models of this theory lying around. First of all, there might not be any uncountable transitive models of ZFC at all. And secondly, it doesn't seem impossible that every uncountable transitive model believes that there is an inaccessible (and when you chop it off at that inaccessible the model becomes countable). On the other hand, we are still ok in this second case, at least consistency-wise, since we can pass to $L$ where this cannot happen.</p>
|
1,818,976 | <p>Let there be many numbers $a_1,a_2,a_3,\dots,a_n$.</p>
<p>I want to find the first digit of their product, i.e. of $A=a_1\times a_2\times a_3\times a_4\times \dots\times a_n$.</p>
<p>These numbers are huge and multiplying all of them exceeds the time limit.</p>
<p>Is there any shortcut to find the most significant digit of $A$ (first digit from the left)?</p>
| Tsemo Aristide | 280,301 | <p>Hint: The set $R$ of the remainders of the division of $2\pi nr$, $n\in \mathbb N$, by $2\pi$ is dense in $[0,2\pi]$. Remark that $R$ is $2\pi\{rn-\lfloor rn \rfloor : n\in \mathbb N\}$.</p>
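As an aside on the question itself: the leading digit of a product can be read off from the fractional part of $\sum_i \log_{10} a_i$ (this connects to the equidistribution idea in the hint). This is a sketch of my own; double-precision rounding can in principle misreport the digit when the fractional part sits extremely close to a digit boundary.

```python
import math

def leading_digit(nums):
    # frac(sum log10 a_i) determines the mantissa 10^frac in [1, 10),
    # whose integer part is the first digit of the product.
    frac = math.fsum(math.log10(a) for a in nums) % 1.0
    return int(10 ** frac)

# Cross-check against the exact product for a moderate example.
nums = [123456789, 987654321, 555555557, 42424243]
exact = 1
for a in nums:
    exact *= a
assert leading_digit(nums) == int(str(exact)[0])
```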
|
238,128 | <p>Let $G$ be an abelian group. <br/>
Show that $\{x\in{G} | |x| < \infty\}$ is a subgroup of $G$. Give an example of a non-abelian group where this fails to be a subgroup.</p>
| Thomas | 26,188 | <p>I assume that by $\lvert g \lvert$ you mean the order of $g$, so that the subset you are considering is the subset of all elements with finite order. Call this subset $X$.</p>
<p>Without actually doing the problem for you, here are a few things that might help.</p>
<p>To check that a subset of a group is a subgroup, all you need to do is to check that the subset itself satisfies the axioms for a group. First, then, you need to figure out whether if $g,h\in X$ then $gh\in X$. In your case this translates to the question of whether the product of two elements with finite order has finite order. Since your group $G$ is abelian, this shouldn't be too hard to prove.</p>
<p>Then you need to check that all elements in $X$ have an inverse in $X$. An element $g\in G$ obviously has an inverse in $G$, but is this inverse actually in $X$? Well, if $g$ has finite order, it shouldn't be too hard to see that $-g$ (writing the group additively because it is abelian) also has finite order (Hint: $-0 = 0$).</p>
<p>Without listing all the axioms, you should also check that the identity element belongs to $X$ (it clearly has finite order); associativity is inherited from $G$.</p>
<p>Googling or looking in a book will give you other things that you need to check.</p>
<p>As for the second question about an example of $G$ non-abelian where $X$ is not a subgroup, you can take a look at this question: <a href="https://math.stackexchange.com/questions/208312/tg-may-not-be-a-subgroup">$T(G)$ may not be a subgroup?</a> (As mentioned in the comments by @anon). Note, though, that in this question the group is written multiplicatively. </p>
<p>Hopefully this was helpful.</p>
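For the non-abelian part, here is a concrete numeric witness of the kind of failure the linked question describes (my own example in $GL_2(\mathbb Z)$, not from the post above): two elements of order $2$ whose product is a shear of infinite order.

```python
def mat_mul(A, B):
    # 2x2 integer matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[-1, 0], [0, 1]]   # a reflection: A^2 = I
B = [[-1, 1], [0, 1]]   # another involution: B^2 = I
assert mat_mul(A, A) == I and mat_mul(B, B) == I

P = mat_mul(A, B)       # P = [[1, -1], [0, 1]], a shear
Q = P
for _ in range(50):     # P^k = [[1, -k], [0, 1]] never returns to I
    assert Q != I
    Q = mat_mul(Q, P)
```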
|
1,336,381 | <p>Find the sum of solutions to:</p>
<p>$$ 2\log^2_{4}(|x+1|)+\log_4(|x^2-1|)+\log_{\frac{1}{4}}(|x-1|)=0 $$</p>
<p>I'm not sure about what to do with the absolute values, how can I get rid of them?</p>
<p>Should I solve for all various cases depending on the sign of $x+1$ and $x-1$?</p>
| Ian | 83,396 | <p>It is not true. The classic example is in a slightly different setup: $f_n(x)=x^n$ converges pointwise to zero on $[0,1)$, uniformly on $[0,1-\delta]$ for every $\delta \in (0,1)$, but it does not converge uniformly on all of $[0,1)$. If it did, then one could interchange limits and conclude that $\lim_{n \to \infty} f_n(1)=0$, but this limit is $1$.</p>
<p>You can convert this example to your setup easily enough by considering $f_n(x)=(1-x)^n$.</p>
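A numeric illustration of Ian's example (a sketch): on each $[0,1-\delta]$ the sup of $x^n$ is $(1-\delta)^n \to 0$, while the sup over all of $[0,1)$ stays as close to $1$ as we like, so the convergence cannot be uniform there.

```python
def sup_on_grid(n, upper, points=10000):
    # sup of x^n over a uniform grid of [0, upper]; since x^n is
    # increasing, the sup is attained exactly at the right endpoint.
    return max((upper * k / points) ** n for k in range(points + 1))

n = 50
assert sup_on_grid(n, 0.9) == 0.9 ** n   # small: about 5.2e-3
assert (1 - 1e-6) ** n > 0.99            # but sup over [0, 1) stays near 1
```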
|
344,166 | <p>I was for some time curious about William Feller's probability tract (first volume); luckily, I could lay my hands on it recently and I find it of superb quality. It provides a complete exposition of elementary (no measure theory) probability. The book is rigorous "hard" math but doesn't shy away from giving a solid intuitive feeling. The author discusses a topic, mentions an example, and proposes different scenarios that give back more math. His first chapter on the "nature of probability" is essential. It gives a good feeling for what statistical probability means, and why/how it was defined as it is. </p>
<p>Question: I'm looking for other math books on fundamental mathematics (algebra, real analysis, etc.) -- essential mathematics that is not very advanced (algebraic geometry, for example) -- of high quality like Feller's probability text. Feller might not be used anymore, but it's full of exercises that would make it a working textbook written by a master.</p>
<p>To be specific and not too general: I'm looking exactly for inspiring Feller-style books in real analysis and abstract algebra. Rudin is good, but it's not a master book. I don't know much about the available abstract algebra textbooks/master expositions. </p>
| Community | -1 | <p>I highly recommend Artin's <em><a href="http://rads.stackoverflow.com/amzn/click/0130047635" rel="nofollow">Algebra</a></em>. In my opinion, it fits your criteria nicely; topic introduction, concrete example, and then thought provoking discussions that pique your interest. The exercises are fantastic.</p>
<p>If Artin is not advanced enough, then I second Jorge's recommendation for Paolo Aluffi.</p>
|
4,205,906 | <p>Since X and Y are independent and uniform I have the joint density function f(x,y)=1, but I'm not sure where to go from there. I keep getting that the answer is f(z)=1, but this doesn't make sense since the range of z is from 0 to infinity.</p>
<p>So if I make the substitution Z=Y/X and W=X, I get Y=ZW and X=W. The 4 derivatives for the Jacobian are <span class="math-container">$\frac{dx}{dz}=0$</span>, <span class="math-container">$\frac{dx}{dw}=1$</span>, <span class="math-container">$\frac{dy}{dz}=W$</span> and <span class="math-container">$\frac{dy}{dw}=Z$</span>, which give |J|=W and so f(z,w)=<span class="math-container">$1*w$</span></p>
<p>The range of W is (0,1), and integrating w over that range just gives f(z)=1. This answer doesn't make sense to me since the range of Z is 0 to <span class="math-container">$\infty$</span> and f(z) should integrate to 1 over that range. I think I am making a mistake somewhere right after I get f(z,w)=w but I'm not sure where.</p>
| Vons | 274,987 | <p>The p-value is the smallest level of significance <span class="math-container">$\alpha_0$</span> at which I would reject the null hypothesis with the observed data.</p>
<p>In olden times people simply reported a result, "rejected the null" or "did not reject the null." This doesn't tell me how close I was to not rejecting the null hypothesis. If a researcher rejected the null hypothesis at significance level <span class="math-container">$0.05$</span>, this doesn't tell other researchers if they would reject the null hypothesis at significance level <span class="math-container">$0.01$</span>, which is a stricter condition. Hence, it has become common practice to report <em>all</em> <span class="math-container">$\alpha_0$</span>'s at which <span class="math-container">$H_0$</span> would be rejected.</p>
<p>If the test <span class="math-container">$\delta$</span> is of the form "Reject <span class="math-container">$H_0$</span> if <span class="math-container">$T\ge c$</span>" for a test statistic <em>T</em>, and a value of <span class="math-container">$T=t$</span> is observed, then the p-value equals the size of the test <span class="math-container">$\delta_t$</span> with <span class="math-container">$\delta_t$</span> being the test that rejects <span class="math-container">$H_0$</span> if <span class="math-container">$T\ge t$</span>:</p>
<p><span class="math-container">$$\text{p-value}=\alpha(\delta_t)=\sup_{\theta\in\Omega_0}\pi(\theta|\delta_t)=\sup_{\theta\in\Omega_0}\Pr(T\ge t|\theta)$$</span></p>
<p>It's often called the chance of observing a dataset as extreme as the one observed if the null hypothesis is true, which is true if the null hypothesis is simple (contains only one point) or if the power is maximized on a boundary point of <span class="math-container">$\Omega_0$</span> (which is often the case).</p>
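A small sketch of this in the simplest setting (simple null, one-sided binomial test; my own example, not from the answer): the p-value of the observed statistic $t$ is $\Pr(T \ge t)$ under $H_0$.

```python
from math import comb

def p_value(n, t, p0):
    # Pr(T >= t) for T ~ Binomial(n, p0): the smallest alpha at which
    # the test "reject H0 if T >= t" rejects with the observed t.
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(t, n + 1))

# Observing 8 heads in 10 tosses of a putatively fair coin:
pv = p_value(10, 8, 0.5)
assert abs(pv - 56 / 1024) < 1e-12   # (C(10,8)+C(10,9)+C(10,10)) / 2^10
assert 0.01 < pv < 0.10              # reject at alpha = 0.10, not at 0.01
```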
|
1,630,655 | <p>Not sure what to do / how to start this... I have the factorization of 504: $2 \cdot2 \cdot 2 \cdot 3 \cdot 3 \cdot 7$</p>
| Justpassingby | 293,332 | <p>Hint: decompose the polynomial $n^9-n^3$ into linear and quadratic irreducible factors. Observe divisibility for $2^3,$ $3^2$ and $7$ separately.</p>
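The hinted route can be checked exhaustively for small $n$ (each prime-power factor of $504 = 8 \cdot 9 \cdot 7$ divides $n^9 - n^3$ on its own):

```python
# n^9 - n^3 = n^3 (n^3 - 1)(n^3 + 1); check each factor of 504 separately.
for n in range(-500, 501):
    v = n**9 - n**3
    assert v % 8 == 0 and v % 9 == 0 and v % 7 == 0
    assert v % 504 == 0
```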
|
1,630,655 | <p>Not sure what to do / how to start this... I have the factorization of 504: $2 \cdot2 \cdot 2 \cdot 3 \cdot 3 \cdot 7$</p>
| robjohn | 13,854 | <p>$$
\begin{align}
&\frac{n^9-n^3}{504}\\
&=\small720\binom{n}{9}+2880\binom{n}{8}+4620\binom{n}{7}+3780\binom{n}{6}+1655\binom{n}{5}+370\binom{n}{4}+36\binom{n}{3}+\binom{n}{2}
\end{align}
$$</p>
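The stated expansion can be verified term by term for small $n$ with exact integer arithmetic:

```python
from math import comb

# Coefficients of the binomial expansion of (n^9 - n^3)/504 given above.
coeffs = {9: 720, 8: 2880, 7: 4620, 6: 3780, 5: 1655, 4: 370, 3: 36, 2: 1}

for n in range(30):
    assert (n**9 - n**3) % 504 == 0
    assert (n**9 - n**3) // 504 == sum(c * comb(n, k) for k, c in coeffs.items())
```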
|
322,858 | <p>Let <span class="math-container">$G$</span> be a split semisimple real Lie group in characteristic zero, and let <span class="math-container">$B=TU$</span> be a Borel subgroup with unipotent radical <span class="math-container">$U$</span> and Levi <span class="math-container">$T$</span>. Fix an ordering on the roots <span class="math-container">$\Phi^+$</span> of <span class="math-container">$T$</span> in <span class="math-container">$U$</span>, and for each root subgroup <span class="math-container">$U_{\alpha}$</span> of <span class="math-container">$U$</span>, let <span class="math-container">$u_{\alpha}: \mathbb R \rightarrow U_{\alpha}$</span> be an isomorphism.</p>
<p>For all <span class="math-container">$\alpha, \beta \in \Phi^+$</span>, there exist unique real numbers <span class="math-container">$C_{\alpha,\beta,i,j}$</span> (depending on the <span class="math-container">$u_{\alpha}$</span> and the ordering) such that for all <span class="math-container">$x, y \in \mathbb R$</span>,</p>
<p><span class="math-container">$$u_{\alpha}(x) u_{\beta}(y) u_{\alpha}(x)^{-1} = u_{\beta}(y) \prod\limits_{\substack{i,j>0\\ i\alpha + j \beta \in \Phi^+}} u_{i\alpha+j\beta}(C_{\alpha,\beta,i,j}x^iy^j)$$</span></p>
<p>I want to work out some examples with unipotent groups of exceptional semisimple groups, and am looking for table of structure constants for the root system G2. Does anyone know a reference where an ordering on the roots is chosen and these constants are written down?</p>
| Peter McNamara | 425 | <p>SGA III, Expose XXIII, Section 3.4.</p>
|
2,579,156 | <p>I found the solution of series on Wolfram Alpha
<a href="http://www.wolframalpha.com/input/?i=sum+1%2F(2k%2B1)%2F(2k%2B2)+from+1+to+n" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=sum+1%2F(2k%2B1)%2F(2k%2B2)+from+1+to+n</a></p>
<p>$ \sum\limits_{k=1}^{n} \left(\frac{1}{2k+1} - \frac{1}{2k+2}\right) = \sum\limits_{k=1}^{n} \frac{1}{(2k+1)(2k+2)} = \frac{1}{2} \left(H_{n+\frac{1}{2}} - H_{n+1} -1 + \text{ln}(4)\right)$</p>
<p>Can someone tell me how to prove this expression in terms of harmonic numbers?</p>
| omegadot | 128,913 | <p>We will make use of the following <a href="https://en.wikipedia.org/wiki/Harmonic_number#Hyperharmonic_numbers" rel="nofollow noreferrer">integral representation</a> for the harmonic numbers
$$H_x = \int^1_0 \frac{1 - t^x}{1 - t} \, dt. \tag1$$</p>
<p>Let
$$S = \sum^n_{k = 1} \frac{1}{(2k + 1)(2k + 2)} = \sum^n_{k = 1} \left (\frac{1}{2k + 1} - \frac{1}{2k + 2} \right ).$$
Noting that
$$\int^1_0 x^{2k} \, dx = \frac{1}{2k + 1} \quad \text{and} \quad \int^1_0 x^{2k + 1} \, dx = \frac{1}{2k + 2},$$
our sum can be rewritten as
$$S = \sum^n_{k = 1} \int^1_0 (x^{2k} - x^{2k + 1}) \, dx = \int^1_0 (1 - x) \sum^n_{k = 1} x^{2k} \, dx.$$
As the finite sum appearing here is geometric, it can be summed. As
$$\sum^n_{k = 1} x^{2k} = \frac{x^2 (1 - x^{2n})}{1 - x^2},$$
one has
$$S = \int^1_0 \frac{x^2 ( 1 - x^{2n})}{1 + x} \, dx = \int^1_0 \left [\frac{x^2}{1 + x} - \frac{x^{2n + 2}}{1 + x} \right ] \, dx = I_1 - I_2.$$</p>
<p>The first integral is trivial. Here
$$I_1 = \int^1_0 \frac{x^2}{1 + x} \, dx = \int^1_0 \left (x - 1 + \frac{1}{1 + x} \right ) \, dx = \left [\frac{x^2}{2} - x + \ln (1 + x) \right ]^1_0 = -\frac{1}{2} + \ln (2).$$</p>
<p>For the second integral
\begin{align*}
I_2 &= \int^1_0 \frac{x^{2n + 2}}{1 + x} \cdot \frac{1 - x}{1 - x} \, dx = \int^1_0 \frac{(1 - x) x^{2n + 2}}{1 - x^2} \, dx.
\end{align*}
Letting $x \mapsto \sqrt{x}$ gives
\begin{align*}
I_2 &= \frac{1}{2} \int^1_0 \frac{x^{n + 1/2} - x^{n + 1}}{1 - x} \, dx\\
&= \frac{1}{2} \int^1_0 \frac{(1 - x^{n + 1}) - (1 - x^{n + 1/2})}{1 - x} \, dx\\
&= \frac{1}{2} \int^1_0 \frac{1 - x^{n + 1}}{1 - x} \, dx - \frac{1}{2} \int^1_0 \frac{1 - x^{n + 1/2}}{1 - x} \, dx\\
&= \frac{1}{2} \left (H_{n + 1} - H_{n + 1/2} \right ),
\end{align*}
where in the last line we have made use of (1).</p>
<p>Thus
$$\sum^n_{k = 1} \frac{1}{(2k + 1)(2k + 2)} = I_1 - I_2 = \ln (2) - \frac{1}{2} - \frac{1}{2} \left (H_{n + 1} - H_{n + 1/2} \right ),$$
as required.</p>
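A numeric spot-check of the closed form (equivalently $\ln 2 - \tfrac12 + \tfrac12(H_{n+1/2} - H_{n+1})$, the expression quoted in the question), using the known value $H_{1/2} = 2 - 2\ln 2$ to evaluate the half-integer harmonic numbers:

```python
import math

def H_int(m):
    # H_m for a positive integer m.
    return sum(1.0 / k for k in range(1, m + 1))

def H_half(n):
    # H_{n + 1/2} = H_{1/2} + sum_{k=1}^{n} 1/(k + 1/2), H_{1/2} = 2 - 2 ln 2.
    return 2 - 2 * math.log(2) + sum(1.0 / (k + 0.5) for k in range(1, n + 1))

for n in range(1, 50):
    direct = sum(1.0 / ((2 * k + 1) * (2 * k + 2)) for k in range(1, n + 1))
    closed = math.log(2) - 0.5 + 0.5 * (H_half(n) - H_int(n + 1))
    assert abs(direct - closed) < 1e-12
```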
|
424,404 | <p>I want to create an algorithm/calculation that helps me figure out if the price on a used vehicle is high or low.</p>
<p>My thoughts are that to calculate this I need a critical mass of previous vehicle sales with the same characteristics: manufacturer, model, year and trim. Having this data I must figure out each sale's raw price (the sale price divided by the mileage the vehicle had driven). Working with the raw price of each previously sold car, I can then calculate the average price, this being the "market price".</p>
<p>Finally, using the market price I must be able to answer my initial question: is a price too high or too low compared to the market price?</p>
<p>Take notice that this doesn't include a vehicle's color, extra equipment etc. Some colors are more sought after, and some equipment adds value to a vehicle. I would account for this with fixed values at the end of my calculation.</p>
<p>Any help on how to construct this formula is appreciated. It's been a long time since I last thought in terms of calculations!</p>
<p>Thanks,
Kristian</p>
| Ryan Walch | 62,355 | <p>You may just want to run a <a href="http://en.wikipedia.org/wiki/Ordinary_least_squares" rel="nofollow">regression</a> on price relative to the values you care about, such as mileage. You can then use the equation you estimate to see if the predicted value of the car you are looking at is larger or smaller than the price they are actually charging. </p>
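A minimal sketch of that suggestion with one predictor (closed-form simple linear regression; the sales numbers below are made up for illustration):

```python
def ols(xs, ys):
    # Closed-form simple linear regression: returns (intercept, slope).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return my - mx * sxy / sxx, sxy / sxx

mileage = [20000, 45000, 60000, 90000, 120000]   # past sales (hypothetical)
price   = [18000, 15500, 14000, 11000,  8000]
b0, b1 = ols(mileage, price)

# Compare a listing's asking price with its fitted "market price".
asking, miles = 16000, 80000
market = b0 + b1 * miles
verdict = "high" if asking > market else "low"
assert verdict == "high"   # asks 16000 against a fitted 12000
```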
|
2,807,356 | <blockquote>
<p><strong>If $z_1,z_2$ are two complex numbers such that $\vert
z_1+z_2\vert=\vert z_1\vert+\vert z_2\vert$, then it is necessary that</strong> </p>
<p>$1)$$z_1=z_2$</p>
<p>$2)$$z_2=0$</p>
<p>$3)$$z_1=\lambda z_2$for some real number $\lambda.$</p>
<p>$4)$$z_1z_2=0$ or $z_1=\lambda z_2$ for some real number $\lambda.$</p>
</blockquote>
<p>From Boolean logic we know that if $p\implies q$ then $q$ is necessary for $p$.</p>
<p>For $1)$, taking $z_1=1$ and $z_2=2$ we have $\vert 1+2 \vert=\vert 1\vert+\vert 2\vert$ but $1\neq 2$. So $(1)$ is false.</p>
<p>For $2)$, taking $z_1=1$ and $z_2=2$ we have $\vert 1+2 \vert=\vert 1\vert+\vert 2\vert$ but $2\neq 0$. So $(2)$ is false.</p>
<p>I'm not getting how to prove or disprove options $(3)$ and $(4)?$</p>
<p>Need help</p>
| user376343 | 376,343 | <p>In general, the numbers <span class="math-container">$z_1,\,z_2,\,z_1+z_2$</span> represent vectors <span class="math-container">$\vec{u},\,\vec{v},\,\vec{u}+\vec{v}$</span> starting from <span class="math-container">$O.$</span> Thus <span class="math-container">$|z_1+z_2|$</span> is the length of a diagonal in the parallelogram with sides <span class="math-container">$|z_1|$</span> and <span class="math-container">$|z_2|.$</span> By the triangle inequality, <span class="math-container">$$|z_1+z_2|\leq |z_1|+|z_2|.$$</span>
Equality occurs only in the degenerate case(s), where the vertices are collinear. This gives the necessary condition:</p>
<p><em>All four vertices of the parallelogram are equal OR they are different but collinear.</em></p>
<p>The answer 4) is right.</p>
|
191,984 | <p>In this context composition series means the same thing as defined <a href="http://en.wikipedia.org/wiki/Composition_series#For_groups" rel="noreferrer">here.</a></p>
<p>As the title says given a finite group <span class="math-container">$G$</span> and <span class="math-container">$H \unlhd G$</span> I would like to show there is a composition series containing <span class="math-container">$H.$</span></p>
<p>Following is my attempt at it.</p>
<p>The main argument of the claim is showing the following.</p>
<blockquote>
<p><strong>Lemma.</strong> If <span class="math-container">$H \unlhd G$</span> and <span class="math-container">$G/H$</span> is not simple then there exists a subgroup <span class="math-container">$I$</span> such that <span class="math-container">$H \unlhd I \unlhd G.$</span></p>
</blockquote>
<p>The proof follows from the 4th isomorphism theorem, since if <span class="math-container">$G/H$</span> is not simple then there is a nontrivial proper normal subgroup <span class="math-container">$\overline{I} \unlhd G/H$</span> of the form <span class="math-container">$\overline{I} = I/H$</span> for some subgroup <span class="math-container">$I$</span> with <span class="math-container">$H \unlhd I \unlhd G.$</span></p>
<p>Suppose now that <span class="math-container">$G/H$</span> is not simple. Using the above lemma we deduce that there exists a finite chain of groups (since <span class="math-container">$G$</span> is finite) such that <span class="math-container">$$H \unlhd I_1 \unlhd \cdots \unlhd I_k \unlhd G$$</span> and <span class="math-container">$G/I_k$</span> is simple. Now one has to repeat this process for all other pairs <span class="math-container">$I_{i+1}/I_{i}$</span> and for <span class="math-container">$I_1/H$</span> until the quotients are simple groups. This is all fine since all the subgroups are finite as well.</p>
<p>Now if <span class="math-container">$H$</span> is simple we are done otherwise there is a group <span class="math-container">$J \unlhd H$</span> and we inductively construct the composition for <span class="math-container">$H.$</span></p>
<p>Is the above "proof" correct? If so, is there a way to make it less messy?</p>
| Community | -1 | <p>Suppose you have $A \triangleleft C$ with $C/A$ not simple. Then there is some nontrivial subgroup $B/A \triangleleft C/A$ (by Lattice Isomorphism Theorem), so you can extend $A \triangleleft C$ to $A \triangleleft B \triangleleft C$. By repeating this step you yield a composition series (it is clearly bounded, and will first terminate when all the composition factors are indeed simple).</p>
<p>Take $1 \triangleleft N \triangleleft G$ and apply above step. This constructs a composition series containing $N$.</p>
|
2,662,554 | <p>I have to use proof by contradiction to show that if $n^2 - 2n + 7$ is even then $n + 1$ is even. </p>
<p>Assume $n^2 - 2n + 7$ is even and $n + 1$ is odd. By definition of odd integers, we have $n = 2k+1$. </p>
<p>What I have done so far:</p>
<p>\begin{align}
& n + 1 = (2k+1)^2 - 2(2k+1) + 7 \\
\implies & (2k+1) = (4k^2+4k+1) - 4k-2+7-1 \\
\implies & 2k+1 = 4k^2+1-2+7-1 \\
\implies & 2k = 4k^2 + 4 \\
\implies & 2(2k^2-k+2)
\end{align}</p>
<p>Now, this is even but I wanted to prove that this is odd (the contradiction). Can someone help me figure out my mistake? </p>
<p>Thank you.</p>
| Community | -1 | <blockquote>
<p>Prove that $n^2-2n+7$ is even <em>then</em> $n+1$ is even.</p>
</blockquote>
<p>If $n^2-2n+7$ is even, then $n^2-2n$ is odd.</p>
<p>For all even numbers, this does not work since $(even)^2-2(even)$ always results in an even number.</p>
<p>Therefore, the number $n$ <strong>must</strong> be an odd number. Since $n$ is odd, $n+1$ is even.</p>
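The parity claim is easy to confirm exhaustively for small integers:

```python
# n^2 - 2n + 7 is even exactly when n is odd, i.e. when n + 1 is even.
for n in range(-200, 201):
    value_even = (n * n - 2 * n + 7) % 2 == 0
    assert value_even == (n % 2 == 1)
```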
|
1,279,873 | <p>Romeo and Juliet have a date at a given time, and each will arrive at the meeting place with a delay between 0 and 1 hour, with all pairs of delays being equally likely. The first to arrive will wait for 15 minutes and will leave if the other has not yet arrived. What is the probability that they will meet?</p>
<p>My text has the answer as 7/16, and I just don't get it. I'm just reading the book for self study - no one to ask!</p>
<p>My logic:</p>
<p>One of them has to arrive first, or they both arrive at the same time. The probability they arrive at the exact same time is zero. Suppose Romeo arrives first. </p>
<p>If Romeo is the first to arrive, and he arrives after 45min, they are guaranteed to meet. </p>
<p>A = P(Romeo arrives within the first 45min) = 45/60 = 3/4.<br>
B = P(Juliette arrives within some 15min interval) = 15/60 = 1/4.</p>
<p>P(A and B) = 3/4 * 1/4 = 3/16.</p>
<p>Help?</p>
| kilpikonna | 191,907 | <p>I would solve it like this (I'm sorry, but I'm not good at drawing pictures in LaTeX, so I've made it by hand; hope it helps).</p>
<p>So let $x$ be the time when Romeo arrives. He can arrive at any time between <strong>0</strong> and <strong>1</strong>; let $y$ be the time when Julia comes. </p>
<p>So the points [x,y] of the square are all possible combinations of times when they come, if I can say it like that. The area of the square is <strong>1</strong>.</p>
<p>The situation that they come at the same moment is symbolized by the diagonal. But one of them can arrive even 15 minutes later - the upper line symbolizes the
situation when Romeo will wait for Julia exactly 15 minutes, so the area between the diagonal and the upper line symbolizes the situation when Romeo will wait for Julia 15 minutes or less. </p>
<p>The lower line, on the other hand, represents the situation when Julia will wait for Romeo exactly 15 minutes, so the area between this line and the diagonal means that Julia is waiting for Romeo 15 minutes or less.</p>
<p>Together, these two areas give all possible "time-combinations" at which they'll meet. </p>
<p>So now the probability $P(they \ meet) \ = \frac{the \ area \ of \ the \ part \ in \ which \ they \ meet}{the \ area \ of \ the \ square} = \frac{7}{16}$. </p>
<p><img src="https://i.stack.imgur.com/cxAMo.jpg" alt=""> </p>
|
1,279,873 | <p>Romeo and Juliet have a date at a given time, and each will arrive at the meeting place with a delay between 0 and 1 hour, with all pairs of delays being equally likely. The first to arrive will wait for 15 minutes and will leave if the other has not yet arrived. What is the probability that they will meet?</p>
<p>My text has the answer as 7/16, and I just don't get it. I'm just reading the book for self study - no one to ask!</p>
<p>My logic:</p>
<p>One of them has to arrive first, or they both arrive at the same time. The probability they arrive at the exact same time is zero. Suppose Romeo arrives first. </p>
<p>If Romeo is the first to arrive, and he arrives after 45min, they are guaranteed to meet. </p>
<p>A = P(Romeo arrives within the first 45min) = 45/60 = 3/4.<br>
B = P(Juliette arrives within some 15min interval) = 15/60 = 1/4.</p>
<p>P(A and B) = 3/4 * 1/4 = 3/16.</p>
<p>Help?</p>
| Ray | 472,812 | <p>Let the point <span class="math-container">$(x,y)$</span> mean that romeo arrives at <span class="math-container">$x$</span> time, juliet arrives at <span class="math-container">$y$</span> time. The sample space is the unit square since <span class="math-container">$0 \le x,y \le 1$</span>.</p>
<p>Let <span class="math-container">$A$</span> be the event that romeo and juliet meet, <span class="math-container">$A$</span> is the subset of points <span class="math-container">$(x,y)$</span> satisfying <span class="math-container">$|x-y| \le 0.25$</span> and lying inside the unit square. If <span class="math-container">$x\ge y$</span>, then <span class="math-container">$x \le y + 0.25 \implies y \ge x -0.25$</span>. If <span class="math-container">$y \ge x$</span> then <span class="math-container">$y \le x+0.25$</span>.</p>
<p>Thus event <span class="math-container">$A$</span> consists of all points in the unit square satisfying <span class="math-container">$y \ge x-0.25$</span> when <span class="math-container">$x\ge y$</span> and <span class="math-container">$y \le x +0.25$</span> when <span class="math-container">$y\ge x$</span>. Graphing this set of points and calculating its area (divided by area of unit square = 1) will yield the answer <span class="math-container">$7/16$</span>.</p>
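The $7/16$ area can also be sanity-checked by simulation. Here is a small Monte Carlo sketch (the function name, trial count, and seed are my own choices):

```python
import random

def estimate_meet_probability(trials=200_000, wait=0.25, seed=12345):
    """Estimate P(|x - y| <= wait) for x, y independent uniform on [0, 1]."""
    rng = random.Random(seed)
    hits = sum(abs(rng.random() - rng.random()) <= wait for _ in range(trials))
    return hits / trials

estimate = estimate_meet_probability()
# Exact area: the unit square minus two corner triangles with legs 3/4,
# i.e. 1 - 2*(1/2)*(3/4)**2 = 7/16 = 0.4375.
exact = 7 / 16
```

With this many trials the estimate lands within about $0.01$ of $7/16 = 0.4375$.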
|
2,396,561 | <blockquote>
<p>Solve the differential equation
$$x^2y''+xy'-y=\frac{x^2}{2+x}$$</p>
</blockquote>
<p>My attempt: put $x=e^z$ then $z=\log x$</p>
<p>and the equation reduces to $(\theta^2-1)y=\frac{e^{2z}}{2+e^z}$, where $\theta\equiv\frac{d}{dz}$</p>
<p>so $y_c=c_1x+c_2\frac{1}{x}$ </p>
<p>how to find particular integral </p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. One may integrate by parts,
$$
\begin{align}
I_{n,a}&=\int_{-\infty}^\infty\frac{dx}{(1+x^2/a)^{n}}\qquad (n\ge1)
\\\\&=\int_{-\infty}^\infty 1 \cdot\frac{1}{(1+x^2/a)^{n}}\:dx
\\\\&=\left[ x\cdot \frac{1}{(1+x^2/a)^{n}}\right]_{-\infty}^\infty -\int_{-\infty}^\infty x\cdot \frac{-n\cdot\frac{2x}{a}}{(1+x^2/a)^{n+1}}\; dx
\\\\&=\color{red}{0}+2n\int_{-\infty}^\infty \frac{\frac{x^2}{a}}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2n\int_{-\infty}^\infty \frac{1+\frac{x^2}{a}-1}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2n\int_{-\infty}^\infty \frac{1}{(1+x^2/a)^{n}}\; dx-2n\int_{-\infty}^\infty \frac{1}{(1+x^2/a)^{n+1}}\; dx
\\\\&=2nI_{n,a}-2n I_{n+1,a}
\end{align}
$$ giving $(a)$.</p>
<p>One may apply the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow noreferrer">dominated convergence theorem</a> to obtain $(b)$.</p>
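The resulting recursion $I_{n+1,a}=\frac{2n-1}{2n}I_{n,a}$ can be checked numerically. Below is a sketch using the substitution $x=\sqrt a\tan t$ (my own choice), under which the integrand becomes $\sqrt a\cos^{2n-2}t$ on $(-\pi/2,\pi/2)$:

```python
import math

def I(n, a=1.0, steps=100_000):
    """Trapezoid evaluation of I_{n,a} after the substitution x = sqrt(a)*tan(t)."""
    lo, hi = -math.pi / 2, math.pi / 2
    h = (hi - lo) / steps
    s = 0.0
    for k in range(steps + 1):
        t = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0   # trapezoid endpoint weights
        s += w * math.sqrt(a) * math.cos(t) ** (2 * n - 2)
    return s * h

I1, I2 = I(1), I(2)   # I_{1,1} = pi, and the recursion predicts I_{2,1} = I_{1,1}/2
```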
|
535,533 | <p>My confusion is how do we define : $\sin (x)$ for $x\in \mathbb{R}$.</p>
<p>I only know that $\sin(x)$ is defined for degrees and radians..</p>
<p>Suddenly, I have seen what is $\sin (2)$.. </p>
<p>I have no idea how to interpret this when not much information is given what $2$ is... </p>
<p>does this mean $2$ radians or $2$ degrees or some thing else...</p>
<p>I always wanted to clarify this but could not do it... </p>
<p>I guess most of the school students have this confusion.. </p>
<p>please help me to understand this... </p>
<p>Thank you....</p>
| ncmathsadist | 4,154 | <p>To completely specify the sine function, you must specify the unit of angular measure. It is most common in mathematical parlance to use radians. You are correct to be concerned about this ambiguity.</p>
|
1,857,196 | <p><strong>Question :</strong>
Let $f(x) = \sum^n_{k=0}c_kx^k$ be a polynomial function then prove that if $f(x) = 0$ for $n+1$ distinct real values, then every coefficient $c_k$ in $f(x)$ is $0$ , thus $f(x) = 0$ for all real values of $x$.</p>
<p><strong>What I think :</strong>
My problem is that I cannot see how a polynomial function $f(x)$ of degree $n$ could have more than $n$ values that satisfy $f(x) = 0$: if we factorize any degree-$n$ polynomial we get something of the form $(x-a_1)(x-a_2)\cdots(x-a_n) = 0$, which again gives $n$ values; and graphically, the curve of $f(x)$ would meet the $X$ axis $n$ times.</p>
<p>So does the question mean that $f(x)$ does not exist when it says that all the coefficients in $f(x)$ would be zero if $n +1$ real distinct values satisfy $f(x) = 0$ or does it mean something else and how do I prove it ?</p>
| Robert Israel | 8,508 | <p>It does exist, it's just $0$. Your factorization is almost right, but you forgot the leading coefficient: it should be $f(x) = c_n (x - a_1) \ldots (x - a_n)$ (where $a_1, \ldots, a_n$ are the first $n$ of those $n+1$ values). Your job is to show that the polynomial has this form, and that $c_n = 0$.</p>
|
3,806,122 | <p>I tried using Chinese remainder theorem but I kept getting 19 instead of 9.</p>
<p>Here are my steps</p>
<p><span class="math-container">$$
\begin{split}
M &= 88 = 8 \times 11 \\
x_1 &= 123^{456}\equiv 2^{456} \equiv 2^{6} \equiv 64 \equiv 9 \pmod{11} \\
y_1 &= 9^{-1} \equiv 9^9 \equiv (-2)^9 \equiv -512 \equiv -6 \equiv 5 \pmod{11}\\
x_2 &= 123^{456} \equiv 123^0 \equiv 1 \pmod{8}\\
y_2 &= 1^{-1} \equiv 1 \pmod{8} \\
123^{456}
&\equiv \sum_{i=1}^2 x_i\times\frac{M}{m_i} \times y_i
\equiv 9\times\frac{88}{11}\times5 + 1\times\frac{88}{8} \times1 \equiv 371
\equiv 19 \pmod{88}
\end{split}
$$</span></p>
| TheSilverDoe | 594,484 | <p>Modulo <span class="math-container">$88$</span> one has <span class="math-container">$(-7)^6=117649=1337\times 88-7\equiv -7$</span>, hence <span class="math-container">$$123^{456} = 35^{456} = (35^2)^{228} = (-7)^{228} = ((-7)^6)^{38} = (-7)^{38} = ((-7)^6)^6 \times 49 = (-7)^6 \times 49 = -7 \times 49 = -343 = 9 \quad [88]$$</span></p>
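A quick machine check agrees, and also shows where the CRT attempt in the question went wrong: the coefficients $y_i$ should be inverses of $M/m_i$ modulo $m_i$, not inverses of the residues $x_i$. A sketch of the standard recombination (Python 3.8+ for the modular inverse via `pow`):

```python
M, m1, m2 = 88, 11, 8
x1 = pow(123, 456, m1)        # residue mod 11
x2 = pow(123, 456, m2)        # residue mod 8
y1 = pow(M // m1, -1, m1)     # inverse of 8 mod 11
y2 = pow(M // m2, -1, m2)     # inverse of 11 mod 8
crt = (x1 * (M // m1) * y1 + x2 * (M // m2) * y2) % M
direct = pow(123, 456, 88)    # built-in fast modular exponentiation
```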
|
3,525,621 | <p>Find all integral solutions to the equation <span class="math-container">$x^2 + 4xy - y^2 = m$</span> with <span class="math-container">$-5 \leq m \leq 10$</span>.</p>
<p>I know that I can set <span class="math-container">$m = -5$</span> to <span class="math-container">$m = 10$</span> and solve all of the equations independently. But is there any better method to this question?</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$\frac{1}{x}[\log(1+x)-\log(1-x)]=2 \sum_{k=0}^{\infty} \frac{x^{2k}}{2k+1}.$$</span></p>
<p><span class="math-container">$$I=\sum_{k=0}^{\infty} \int_{0}^{1} \frac{2x^{2k}}{2k+1} dx = \sum_{k=0}^{\infty} \frac{2}{(2k+1)^2}=\frac{\pi^2}{4}.$$</span></p>
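The series value can be confirmed numerically; a minimal sketch (the term count is arbitrary, and the tail after $K$ terms is below $1/(2K-1)$):

```python
import math

# Partial sum of 2/(2k+1)^2, which should approach pi^2/4.
partial = sum(2 / (2 * k + 1) ** 2 for k in range(200_000))
target = math.pi ** 2 / 4
```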
|
713,104 | <p>Are there any combinatorial games whose order (in the usual addition of combinatorial games) is finite but neither $1$ nor $2$?</p>
<p>Finding examples of games of order $2$ is easy (for example any impartial game), but I have not been able to think up an example with finite order where the order did not come from some sort of symmetry (for example even though Domineering is not impartial, it is easy to see that any square board will give a game of order $1$ or $2$), and such a symmetry only gives $1$ or $2$ as the possible orders.</p>
| Guy | 127,574 | <p>$$f(n)=\sum_{i=1}^n \frac{i(i+1)}{2}$$</p>
<p>$$f(n)=\sum_{i=1}^n \frac{i^2}2 +\frac{i}2$$</p>
<p>Using two well known identitites,</p>
<p>$$f(n)=\frac12\left(\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right)$$</p>
<p>Simplifying:</p>
<p>$$f(n)=\frac{n(n+1)}{4}\left(\frac{2n+1}{3}+1\right)$$</p>
<p>$$f(n)=\frac{n(n+1)}{4}\left(\frac{2n+4}{3}\right)$$</p>
<p>$$f(n)=\frac{n(n+1)}{2}\left(\frac{n+2}{3}\right)$$</p>
<p>$$f(n)=\frac{n(n+1)(n+2)}{6}$$</p>
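The telescoped closed form can be spot-checked against the direct sum of triangular numbers; a small sketch:

```python
def f_closed(n):
    # n(n+1)(n+2)/6, always an integer
    return n * (n + 1) * (n + 2) // 6

def f_direct(n):
    # sum of the first n triangular numbers i(i+1)/2
    return sum(i * (i + 1) // 2 for i in range(1, n + 1))

checks = [f_closed(n) == f_direct(n) for n in range(1, 100)]
```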
|
163,589 | <p>The tensor product of some (finite dimensional real) vector spaces is acted on by the direct product of their general linear groups. I would like to know if there are explicit invariants in the case of 3 vector spaces. For one vector space there are two orbits: 0 vector, and non-zero vector. For two vector spaces, $T\in U\otimes V \cong Hom(U^*,V)$ there are finitely many orbits characterized by $rank(T)$. For 3 vector spaces the dimension of $U\otimes V\otimes W$ is $uvw$ and the dimension of $GL(U)\times GL(V) \times GL(W)$ is $u^2+v^2+w^2$ so that usually the space of orbits has positive dimension. Any references would be most welcome. I am particularly interested in the case U,V have dimension 4 and W has dimension 8.</p>
| Ben McKay | 13,268 | <p>In the world of exterior differential systems, an element of a triple tensor product is called a <em>tableau</em>. The known invariants of tableaux are complicated; see the book <strong>Exterior Differential Systems</strong> by Bryant, Chern, Gardner, Goldschmidt and Griffiths. There is no classification of tableaux. </p>
|
3,244,649 | <ul>
<li>Show the set <span class="math-container">$A=\{(m,n)\in N\times N : m\leq n\}$</span> is countably infinite.</li>
</ul>
<p>To show that <span class="math-container">$A$</span> is countable we need to show that there is a bijection between <span class="math-container">$A$</span> and <span class="math-container">$\mathbb{N}$</span>, but how can I show <span class="math-container">$A$</span> is countably infinite? </p>
<p>Thanks...</p>
| Kitter Catter | 166,001 | <p>I'll throw in a couple of approaches:</p>
<h2>Quick Approach</h2>
<p>If standard facts may be assumed as premises, then we know a few things:</p>
<ol>
<li>Subsets of countable sets are countable</li>
<li><span class="math-container">$\mathbb{N}\times\mathbb{N}$</span> is countable</li>
<li><span class="math-container">$A$</span> is a subset of <span class="math-container">$\mathbb{N}\times\mathbb{N}$</span></li>
<li><span class="math-container">$A$</span> is infinite, see Guido's answer.</li>
</ol>
<h2>Bijective Approach</h2>
<p>While this isn't necessary to show that <span class="math-container">$A$</span> is countably infinite, there is an explicit bijection between <span class="math-container">$\mathbb{N}$</span> and <span class="math-container">$A$</span>. Namely, for <span class="math-container">$x\in\mathbb{N}$</span> take <span class="math-container">$n=\left\lceil\frac{\sqrt{8 x+1}-1}{2}\right\rceil$</span> and <span class="math-container">$m = x-\frac{n(n-1)}{2}$</span>, giving <span class="math-container">$(m,n)\in A$</span>; and of course the other way is <span class="math-container">$x=\frac{n(n-1)}{2}+m$</span></p>
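The pairing $x=\frac{n(n-1)}{2}+m$ and a ceiling-based inverse can be verified for small values; a sketch (function names are my own):

```python
import math

def encode(m, n):
    """Map (m, n) with 1 <= m <= n to a positive integer."""
    return n * (n - 1) // 2 + m

def decode(x):
    """Recover (m, n) from x via n = ceil((sqrt(8x+1)-1)/2)."""
    n = math.ceil((math.sqrt(8 * x + 1) - 1) / 2)
    return (x - n * (n - 1) // 2, n)

round_trip = [decode(encode(m, n)) == (m, n)
              for n in range(1, 40) for m in range(1, n + 1)]
```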
|
3,562,294 | <p>I know that <span class="math-container">$x^2+y^2$</span> can be factorized as <span class="math-container">$(x+yi)(x-yi)$</span>.</p>
<p>But can we get a factorization of <span class="math-container">$x^2+y^2+1$</span>?</p>
<p>I tried <span class="math-container">$(x+yi+i)(x-yi-i)$</span>, but I couldn't make it work.</p>
<p>By the FTA it should be possible, but I couldn't guess it....</p>
| P. Lawrence | 545,558 | <p>Look at the homogenization of your polynomial, viz. <span class="math-container">$x^2+y^2+z^2$</span>. Finding a non-trivial factorisation <span class="math-container">$$x^2+y^2+z^2=(a'x+b'y+c'z)(a''x+b''y+c''z)$$</span> with complex coefficients is equivalent to decomposing the conic <span class="math-container">$x^2+y^2+z^2=0$</span> in the complex projective plane as the union of two lines. There is a general result that a conic
<span class="math-container">$$\begin{bmatrix}x&y&z\end{bmatrix}\begin{bmatrix}a&h&g\\h&b&f\\g&f&c\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=[0]$$</span> in the complex projective plane is the union of two lines iff <span class="math-container">$$\det\begin{bmatrix}a&h&g\\h&b&f\\g&f&c\end{bmatrix}=0.$$</span>[Note that the proof of this result is highly dependent on the fact that the characteristic of the base field is not 2.] For your polynomial, <span class="math-container">$a=b=c=1,h=g=f=0,$</span> so the determinant is 1 and the conic does not decompose, i.e. the polynomial does not have a non-trivial factorisation.</p>
|
4,000,062 | <p>A sequence <span class="math-container">$\left\{a_n\right\}$</span> is defined as <span class="math-container">$a_n=a_{n-1}+2a_{n-2}-a_{n-3}$</span> and <span class="math-container">$a_1=a_2=\frac{a_3}{3}=1$</span></p>
<blockquote>
<p>Find the value of <span class="math-container">$$a_1+\frac{a_2}{2}+\frac{a_3}{2^2}+\cdots\infty$$</span></p>
</blockquote>
<p>I actually tried this using the difference-equation method. Let the solution be of the form <span class="math-container">$a_n=\lambda^n$</span>:
<span class="math-container">$$\lambda^n=\lambda^{n-1}+2\lambda^{n-2}-\lambda^{n-3}$$</span> which gives the cubic equation <span class="math-container">$\lambda^3-\lambda^2-2\lambda+1=0$</span>. But I am not able to find the roots manually.</p>
| Hari Shankar | 351,559 | <p>Let <span class="math-container">$S=a_1+\dfrac{a_2}{2}+\dfrac{a_3}{2^2}+\dfrac{a_4}{2^3}+\ldots $</span></p>
<p>Then <span class="math-container">$\dfrac{S}{2} = \dfrac{a_1}{2}+\dfrac{a_2}{2^2}+\dfrac{a_3}{2^3}+ \ldots$</span></p>
<p>Subtracting we get</p>
<p><span class="math-container">$\dfrac{S}{2} = a_1 +\dfrac{a_2-a_1}{2}+\dfrac{a_3-a_2}{2^2}+\dfrac{a_4-a_3}{2^3}+ \ldots$</span></p>
<p>Now <span class="math-container">$a_4-a_3 = 2a_2-a_1, a_5-a_4=2a_3-a_2$</span> etc.</p>
<p>So <span class="math-container">$\dfrac{S}{2} = 1 +\dfrac{1-1}{2}+\dfrac{3-1}{2^2}+\dfrac{2a_2-a_1}{2^3}+ \dfrac{2a_3-a_2}{2^4}+\ldots $</span></p>
<p><span class="math-container">$=1+\dfrac{1}{2}-\dfrac{1}{8}+3\left(\dfrac{a_2}{2^4}+\dfrac{a_3}{2^5}+\ldots \right)$</span></p>
<p><span class="math-container">$=\dfrac{11}{8}+\dfrac{3}{8} (S-1) =1+\dfrac{3S}{8}$</span></p>
<p><span class="math-container">$ \Rightarrow \dfrac{S}{8} = 1 $</span> so that <span class="math-container">$S=8$</span></p>
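One can also watch the partial sums of $\sum a_n/2^{n-1}$ converge to $8$ numerically; the root of largest modulus of $\lambda^3-\lambda^2-2\lambda+1$ is about $1.80<2$, so the series converges geometrically. A sketch:

```python
a = [1, 1, 3]                       # a_1, a_2, a_3
for _ in range(200):
    a.append(a[-1] + 2 * a[-2] - a[-3])

# a_1 + a_2/2 + a_3/4 + ... over the computed terms
S = sum(a_n / 2 ** k for k, a_n in enumerate(a))
```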
|
378,735 | <p>Let $A \in M_n(\mathbb C)$. Let $\langle \; \cdot\; , \; \cdot\; \rangle$ be the standard inner product in $ \mathbb C^n$, viewed either as row vectors or as column vectors.</p>
<p>Let $r_j$ be the $j$-th row of A, and let $c_j$ be the $j$-th column of $A$.</p>
<p>Show that A is normal, if and only if $\langle r_i,r_j\rangle$ = $\langle c_j,c_i \rangle$, for all $i$, $j$, $1 \le i$, $j \le n$.</p>
<hr>
<p>I do know that $A$ is normal iff $AA^*$ = $A^*A$.
But how can I evaluate the ij-th component of those two equal matrices?</p>
<p>Thanks in advance.</p>
| Ma Ming | 16,340 | <p>Hint: $A A^*=(\langle r_i,r_j\rangle )$ (Recall the definition of matrix multiplication).</p>
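Ma Ming's hint can be tested numerically: the $(i,j)$ entry of $AA^*$ is $\langle r_i,r_j\rangle$ and the $(i,j)$ entry of $A^*A$ is $\langle c_j,c_i\rangle$. A small pure-Python sketch (the two test matrices are my own choices):

```python
def inner(u, v):
    """Standard Hermitian inner product <u, v> = sum_k u_k * conj(v_k)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

def gram_condition(A):
    """Check <r_i, r_j> == <c_j, c_i> for all i, j."""
    n = len(A)
    rows = A
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    return all(abs(inner(rows[i], rows[j]) - inner(cols[j], cols[i])) < 1e-12
               for i in range(n) for j in range(n))

def is_normal(A):
    """Check A A* == A* A entrywise."""
    n = len(A)
    mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    Astar = [[A[j][i].conjugate() for j in range(n)] for i in range(n)]
    P, Q = mul(A, Astar), mul(Astar, A)
    return all(abs(P[i][j] - Q[i][j]) < 1e-12
               for i in range(n) for j in range(n))

normal_A = [[0, 1j], [1j, 0]]   # i times a real symmetric matrix, hence normal
shear_A = [[1, 1], [0, 1]]      # a Jordan block, not normal
```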
|
4,286,296 | <p>I am trying to prove the following claim:</p>
<blockquote>
<p>Let <span class="math-container">$ 0\leq n \in \Bbb Z$</span> and suppose that there exists a <span class="math-container">$k \in \Bbb Z$</span> such that <span class="math-container">$n=4k+3$</span>.
Prove or disprove: <span class="math-container">$\sqrt n \notin \Bbb Q$</span> .</p>
</blockquote>
<p>The problem I am having is that I am trying to assume by contradiction that <span class="math-container">$\sqrt n \in \Bbb Q$</span> and then I say that there are <span class="math-container">$a,b \in \Bbb Z$</span> such that <span class="math-container">$n=\sqrt {4k+3}=\frac ab$</span>. I finally get to a point where <span class="math-container">$k=\frac {a^2-3b^2}{4b^2}$</span>. Yet I can't find any <span class="math-container">$a,b \in \Bbb Z$</span> that will help me show that the claim is false, nor show a contradiction that will cause the claim to be true.
Any help will be welcomed.</p>
| Lazy | 958,820 | <p>Note the if <span class="math-container">$q$</span> is rational but not integral then <span class="math-container">$q^2$</span> will also not be integral. So you only need to prove that <span class="math-container">$n$</span> is not a power of an integer.</p>
<p>Note that if <span class="math-container">$m=2l$</span> then <span class="math-container">$m^2=4l^2$</span>, if <span class="math-container">$m=4l+1$</span> then <span class="math-container">$m^2=16l^2+8l+1$</span> and if <span class="math-container">$m=4l+3$</span> then <span class="math-container">$m^2 = 16l^2+24l + 8 +1$</span>. So for any <span class="math-container">$m$</span> we have that <span class="math-container">$m^2$</span> is either of the form <span class="math-container">$4r$</span> or of the form <span class="math-container">$4r+1$</span>.</p>
<p>In different words: Modulo <span class="math-container">$4$</span> we have <span class="math-container">$0^2=0,1^2=1,2^2=0,3^2=1$</span>.</p>
<p>So any number of the form <span class="math-container">$4k+3$</span> is not a square of an integer.</p>
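The residue computation is easy to confirm by brute force; a one-line sketch:

```python
# Squares modulo 4 over a large range of integers: only 0 and 1 occur.
residues = {(m * m) % 4 for m in range(10_000)}
```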
|
4,286,296 | <p>I am trying to prove the following claim:</p>
<blockquote>
<p>Let <span class="math-container">$ 0\leq n \in \Bbb Z$</span> and suppose that there exists a <span class="math-container">$k \in \Bbb Z$</span> such that <span class="math-container">$n=4k+3$</span>.
Prove or disprove: <span class="math-container">$\sqrt n \notin \Bbb Q$</span> .</p>
</blockquote>
<p>The problem I am having is that I am trying to assume by contradiction that <span class="math-container">$\sqrt n \in \Bbb Q$</span> and then I say that there are <span class="math-container">$a,b \in \Bbb Z$</span> such that <span class="math-container">$n=\sqrt {4k+3}=\frac ab$</span>. I finally get to a point where <span class="math-container">$k=\frac {a^2-3b^2}{4b^2}$</span>. Yet I can't find any <span class="math-container">$a,b \in \Bbb Z$</span> that will help me show that the claim is false, nor show a contradiction that will cause the claim to be true.
Any help will be welcomed.</p>
| MAHI | 821,955 | <p><span class="math-container">$\sqrt n=\sqrt{4k+3}$</span> is a solution of the equation <span class="math-container">$$x^2 -(4k+3)=0$$</span></p>
<p>Now by <strong>Rational Zeros Theorem</strong> <em><span class="math-container">$\biggr($</span>Which states that : <span class="math-container">$x=p/q$</span> is a Rational Number satisfying the polynomial equation <span class="math-container">$\sum_{r=0}^n a_rx^r $</span> then <span class="math-container">$q(\neq0)|a_n$</span> and <span class="math-container">$p|a_0$</span> and gcd(p,q)=1 <span class="math-container">$\biggr)$</span></em> , the only Rational roots of the equation are <span class="math-container">$±1,±(4k+3),±n|(4k+3) \forall k\in \Bbb Z^+$</span> where <span class="math-container">$n \in \Bbb Z^+$</span>. But <span class="math-container">$\sqrt{4k+3}$</span> is neither of them . So <span class="math-container">$\sqrt n=\sqrt{4k+3}$</span> is irrational.</p>
|
20,784 | <p>I have a set of 3-space coordinates for the atoms of a molecule (I could also transform them into spheres with radii corresponding to the atoms they represent). I would like to place this molecule into the tightest possible 3D rectangular bounding box and determine the coordinates for the box vertices. Is there a graphics processing routine in Mathematica that will do this automatically for me, or do I need to implement an algorithm from the literature?</p>
<p>To be more specific - I would like to obtain the minimum bounding box for my molecule allowing rotation etc. I was able to find a solution for the special case of the molecule I'm interested in using a sort of lame trick with a rotational symmetry. I'm asking this question because I'd like to have a general-case solution.</p>
<p>One way to proceed would be for me to perform a set of random rotations, calculate the volume of the bounding box Mathematica generates, and keep going until I obtain a sufficiently tight fit. Is there a way to get the coordinates for the Graphics3D bounding box vertices?</p>
| Xerxes | 5,406 | <p>Assuming you're not trying to rotate the molecule to minimize some parameter (volume? diagonal?) of the bounding box, take a set of positions and radii:</p>
<pre><code>pos = RandomReal[{0, 10}, {10, 3}];
rad = RandomReal[{1, 2}, {10}];
</code></pre>
<p>Add and subtract a radius from each position and dimension, then take the bounds:</p>
<pre><code>box = {Min[#], Max[#]}& /@ Transpose[Join @@ MapThread[{#1 + #2, #1 - #2}&, {pos, rad}]];
</code></pre>
<p>Here it is:</p>
<pre><code>Graphics3D[{Gray, Sphere[pos, rad]},
PlotRange -> box, PlotRangePadding -> None,
Lighting -> "Neutral"]
</code></pre>
<p><img src="https://i.stack.imgur.com/zd13i.png" alt="Molecule in a box"></p>
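For readers without Mathematica, the same axis-aligned box (no rotation) can be computed in a few lines; a Python sketch with a similar random setup (variable names mirror the Mathematica code):

```python
import random

random.seed(0)
pos = [[random.uniform(0, 10) for _ in range(3)] for _ in range(10)]
rad = [random.uniform(1, 2) for _ in range(10)]

# Per axis: min over (center - radius) and max over (center + radius).
box = [(min(p[d] - r for p, r in zip(pos, rad)),
        max(p[d] + r for p, r in zip(pos, rad)))
       for d in range(3)]
```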
|
1,643,013 | <p>I have just started learning about differential equations, as a result I started to think about this question but couldn't get anywhere. So I googled and wasn't able to find any particularly helpful results. I am more interested in the reason or method rather than the actual answer. Also I do not know if there even is a solution to this but if there isn't I am just as interested to hear why not.</p>
<p>Is there a solution to the differential equation:</p>
<p>$$f(x)=\sum_{n=1}^\infty f^{(n)}(x)$$</p>
| Konstantinos Gaitanas | 99,437 | <p>Differentiate both sides to get $f'(x)=f''(x)+f^{(3)}(x)+...$ </p>
<p>So the starting equation becomes $f(x)=f'(x)+\left(f''(x)+f^{(3)}(x)+\cdots\right)=f'(x)+f'(x)\Rightarrow f(x)=2f'(x)$ </p>
<p>Multiply now both sides by $e^{-\frac{x}{2}}$ and this becomes<br>
$$[e^{-\frac{x}{2}}f(x)]'=0$$<br>
So $f(x)=ce^{\frac{x}{2}}$<br>
Done.</p>
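Since $f(x)=ce^{x/2}$ gives $f^{(n)}(x)=f(x)/2^n$, the original equation reduces to the geometric series $\sum_{n\ge1}2^{-n}=1$; a quick numeric sketch (the test point is arbitrary):

```python
import math

def f(x):
    return math.exp(x / 2)    # take c = 1

x0 = 1.3                      # arbitrary test point
# f^(n)(x) = f(x) / 2**n, so the series of derivatives sums back to f(x).
series = sum(f(x0) / 2 ** n for n in range(1, 60))
```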
|
239,720 | <p>If $A$ is unital C$^*$-algebra, is it true that the multiplier algebra of $A \otimes \mathcal{K} $ is $ A \otimes \mathcal{B}(\mathcal{H})$? Where $\mathcal{K}$ is C$^*$-algebra of compact operators on the Hilbert space $\mathcal{H}$.</p>
| vap | 89,722 | <p>If $A=C_0(X)$ and $B$ is a $C^\ast$-algebra then $M(A\otimes B)$ is the set of strictly continuous functions $\beta X\to M(B)$, where $\beta$ stands for Stone-Čech compactification. </p>
<p>If you take $X$ to be compact and $B=\mathcal{K}$ then we are in your setting. </p>
<p>But $C(X)\otimes\mathcal{B(H)}$ is the set of norm-continuous functions $\beta X=X\to\mathcal{B(H)}$. The strict topology is the $\sigma$-strong-$^\ast$ topology on $\mathcal{B(H)}$, which is different from the norm topology. This should answer your question in the negative.</p>
|
1,867,401 | <p>I refer to this derivation of the gradient in polar coordinates: <a href="http://www.math.jhu.edu/~js/Math202/polar.grad.chain.pdf" rel="nofollow">http://www.math.jhu.edu/~js/Math202/polar.grad.chain.pdf</a></p>
<p>I can understand all parts except why the unit basis vectors $$\hat{e_r}=\langle\cos\theta,\sin\theta\rangle$$
$$\hat{e_\theta}=\langle-\sin\theta,\cos\theta\rangle$$</p>
<p>are defined as such?</p>
<p>I can hazard a guess due to $x=r\cos\theta$, $y=r\sin\theta$, so when $r=1$, that possibly leads to $\hat{e_r}$?
And $\hat{e_\theta}$ has to be orthogonal, but why not $\langle \sin\theta, -\cos\theta\rangle$?</p>
<p>Any other explanation?</p>
<p>Thanks for any help.</p>
| Rui Liu | 459,936 | <p>I was thinking about the same question recently. I think now I get a rather satisfactory answer, I will just post it here.</p>
<hr>
<p>After some searching I think I can now see things a bit more clearly. I will try to clarify my thoughts and give a summary.</p>
<p>Initially I was surprised by the fact that gradient in non-Cartesian coordinates has a different formula. For example, gradient in 2D polar coordinates is:</p>
<p><span class="math-container">$$
\nabla f = \frac{\partial f}{\partial r} \mathbf{\hat{e}}_r + \frac{1}{r}\frac{\partial f}{\partial \theta} \mathbf{\hat{e}}_\theta
$$</span></p>
<p>Searching on that problem led me to <a href="https://en.wikipedia.org/wiki/Curvilinear_coordinates" rel="nofollow noreferrer">curvilinear coordinates</a>, where <span class="math-container">$d\mathbf{r}$</span> is treated and manipulated like a differential form. However, that's not what I thought a differential form is, since a differential form is defined to be a <strong>real</strong>-valued function. I suspected there's a corresponding 'vector-valued' differential form definition, which is why I asked <a href="https://math.stackexchange.com/questions/3275275/vector-differential-rigourous-definition">this question</a>.</p>
<p>However thinking about this a bit more, this <strong>shouldn't</strong> be the case. <span class="math-container">$d\mathbf{r}$</span> is used in line integral, which is written as</p>
<p><span class="math-container">$$
\oint_\gamma f \cdot d\mathbf{r}
$$</span></p>
<p>If we use differential form to interpret the integral, the integral is equivalent to</p>
<p><span class="math-container">$$
\oint_\gamma f_x dx + f_y dy
$$</span></p>
<p>Here apparently <span class="math-container">$f_x dx + f_y dy$</span> is the differential form. So if there is a differential form, it should be <span class="math-container">$f \cdot d\mathbf{r}$</span> not <span class="math-container">$d\mathbf{r}$</span>.</p>
<hr>
<p>So back to the original question on the gradient in non-Cartesian coordinates. I found an alternative way to prove the result in the framework of differential geometry without manipulating <span class="math-container">$d\mathbf{r}$</span> directly.</p>
<p>First it's important to note why gradient behaves a bit non-intuitively in non-Cartesian coordinates. Although gradient looks like the differential (total derivative) in the form of <span class="math-container">$\begin{pmatrix}\frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y}\end{pmatrix}$</span>, it's not differential. At point <span class="math-container">$p$</span>, it is defined as</p>
<p><span class="math-container">$$
g(\nabla f, \mathbf{v}) = df(\mathbf{v})
$$</span></p>
<p>where <span class="math-container">$g$</span> is the inner product on the tangent space <span class="math-container">$T_p$</span>, and <span class="math-container">$df$</span> is the one form (or total derivative, or differential, there're so many names...), which eats a vector and results in a real number as usual. The important thing to note is the involvement of the inner product. Therefore it is reasonable that <strong>the gradient is dependent on the inner product</strong>. Thus if the inner product takes the simplest form in Cartesian coordinates, and more complicated form in other coordinate systems, then it makes sense to have gradient with more complicated form in non-Cartesian coordinate system.</p>
<p>The above equation means <span class="math-container">$g$</span> can also be viewed as a function of <span class="math-container">$T_p \to T_p^*$</span> that eats a tangent vector <span class="math-container">$\nabla f$</span> and results in a one form <span class="math-container">$df$</span>. Now assume the coordinate system <span class="math-container">$q_i$</span> has an orthogonal basis at point <span class="math-container">$p$</span>; then <span class="math-container">$g(\frac{\partial}{\partial q_i}, \frac{\partial}{\partial q_j}) = 0$</span> for <span class="math-container">$i \neq j$</span> and <span class="math-container">$g(\frac{\partial}{\partial q_i}, \frac{\partial}{\partial q_i}) = \lvert \frac{\partial}{\partial q_i} \rvert^2 = h_i^2$</span>. Given a tangent vector <span class="math-container">$\sum a_i\frac{\partial}{\partial q_i}$</span>
<p><span class="math-container">$$
\begin{aligned}
g(\sum a_i\frac{\partial}{\partial q_i}) &= \sum h_i^2a_idq_i
\end{aligned}
$$</span></p>
<p>which is a one form. </p>
<p>Another thing is that we want express <span class="math-container">$\nabla f$</span> to using <strong>orthonormal</strong> vector basis, rather than just orthogonal basis. The orthonormal basis is simply <span class="math-container">$\frac{1}{h_i} \frac{\partial}{\partial q_i}$</span>, put it into previous equation we get</p>
<p><span class="math-container">$$
\begin{aligned}
g(\sum \frac{1}{h_i}\frac{\partial}{\partial q_i}) &= \sum h_idq_i
\end{aligned}
$$</span></p>
<p>Now since <span class="math-container">$df_p = \sum \frac{\partial f}{\partial q_i}dq_i$</span>, to find the gradient we need to find <span class="math-container">$a_i$</span> such that</p>
<p><span class="math-container">$$
\begin{aligned}
g(\sum a_i\frac{1}{h_i}\frac{\partial}{\partial q_i}) &= \sum \frac{\partial f}{\partial q_i}dq_i
\end{aligned}
$$</span></p>
<p>Therefore <span class="math-container">$a_i = \frac{1}{h_i}\frac{\partial f}{\partial q_i}$</span>, hence we get the result</p>
<p><span class="math-container">$$
\begin{aligned}
\nabla f = \sum \frac{1}{h_i} \frac{\partial f}{\partial q_i} \mathbf{\hat{e}}_i
\end{aligned}
$$</span></p>
<hr>
<p>Now, to calculate the gradient in orthogonal coordinates, we need to calculate <span class="math-container">$h_i$</span>. Here we relate the coordinate basis to the Cartesian basis, where <span class="math-container">$\lvert \frac{\partial}{\partial x_i} \rvert = 1$</span>.</p>
<p><span class="math-container">$$
\begin{aligned}
h_i
&= \lvert \frac{\partial}{\partial q_i} \rvert \\
&= \lvert \sum_j \frac{\partial x_j}{\partial q_i} \frac{\partial}{\partial x_j} \rvert
\end{aligned}
$$</span></p>
<p>Consider polar coordinate system, where <span class="math-container">$x = r\cos \theta$</span> and <span class="math-container">$y = r\sin \theta$</span>. We get</p>
<p><span class="math-container">$$
\begin{aligned}
\frac{\partial x}{\partial r} &= \cos \theta \\
\frac{\partial y}{\partial r} &= \sin \theta
\end{aligned}
$$</span></p>
<p>Therefore <span class="math-container">$h_r = \lvert \cos \theta \frac{\partial}{\partial x} + \sin \theta \frac{\partial}{\partial y} \rvert = 1$</span>.</p>
<p><span class="math-container">$$
\begin{aligned}
\frac{\partial x}{\partial \theta} &= -r\sin \theta \\
\frac{\partial y}{\partial \theta} &= r\cos \theta
\end{aligned}
$$</span></p>
<p>Therefore <span class="math-container">$h_\theta = \lvert -r\sin \theta \frac{\partial}{\partial x} + r\cos \theta \frac{\partial}{\partial y} \rvert = r$</span>.</p>
<p>Now we get gradient in polar coordinates</p>
<p><span class="math-container">$$
\nabla f = \frac{\partial f}{\partial r} \mathbf{\hat{e}}_r + \frac{1}{r}\frac{\partial f}{\partial \theta} \mathbf{\hat{e}}_\theta
$$</span></p>
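The formula can be verified numerically against the Cartesian gradient; a finite-difference sketch with an arbitrary test function $f(x,y)=x^2y+y$ (my own choice):

```python
import math

def f(x, y):
    return x ** 2 * y + y

def grad_cartesian(x, y):
    return (2 * x * y, x ** 2 + 1)    # analytic gradient of f

def grad_polar(r, t, h=1e-5):
    """Evaluate (df/dr) e_r + (1/r)(df/dtheta) e_theta by central differences."""
    g = lambda r, t: f(r * math.cos(t), r * math.sin(t))
    g_r = (g(r + h, t) - g(r - h, t)) / (2 * h)
    g_t = (g(r, t + h) - g(r, t - h)) / (2 * h)
    e_r = (math.cos(t), math.sin(t))
    e_t = (-math.sin(t), math.cos(t))
    return (g_r * e_r[0] + g_t / r * e_t[0],
            g_r * e_r[1] + g_t / r * e_t[1])

r0, t0 = 1.7, 0.6
exact = grad_cartesian(r0 * math.cos(t0), r0 * math.sin(t0))
approx = grad_polar(r0, t0)
```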
<hr>
<p>Some good references that helped me:</p>
<p><a href="https://math.stackexchange.com/questions/47618/definition-of-the-gradient-for-non-cartesian-coordinates">Definition of the gradient for non-Cartesian coordinates</a></p>
<p><a href="https://www.math.arizona.edu/~faris/methodsweb/manifold.pdf" rel="nofollow noreferrer">https://www.math.arizona.edu/~faris/methodsweb/manifold.pdf</a></p>
|