title | question_body | answer_body | tags | accepted
string | string | string | string | int64
|---|---|---|---|---|
Preserving the extrema of one function after applying another
|
Suppose we have some function $f(x)$ with local extrema at $x_1, x_2, \dots$, and a second function $g(x)$ which is continuous, strictly increasing and non-zero everywhere over the range of the $x_i$. Will $g(f(x))$ have its local extrema at the same $x_i$ and no others? If so, are there any obvious loosenings of the constraints on $g$ for which this will remain true? (I'm really thinking of this in the context of signal processing, looking at transformations that preserve the visual structure of an image, but it seems like a general question that must have been trivially proved by someone 250 years ago...)
|
You don't need to assume that $g$ is non-zero, and it could just as well be strictly decreasing. Furthermore, the conditions on $g$ only need to hold on the image of $f$ (which doesn't need to be the whole of $\mathbb{R}$, for example). On the other hand, if $g$ has a local extremum at $y=f(z)$ and $f$ is strictly increasing around $z$, then you're obviously in trouble, because $g\circ f$ will have a local extremum at $z$. But having no local extrema is equivalent to being strictly monotonic. The only thing that might relax the condition on $g$ is for it to have local extrema exactly where $f$ does, so that they "cancel each other out" or "amplify each other".
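The claim is easy to check numerically. Here is a small sketch of my own (not from the thread) confirming that composing a sampled signal with a strictly increasing $g$ (here $\exp$) leaves the locations of its local maxima unchanged:

```python
import numpy as np

x = np.linspace(0.1, 10, 2001)
f = np.sin(x) + 0.3 * x            # a signal with several local extrema

def local_max_indices(y):
    # indices where y is strictly larger than both neighbours
    return np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1

# exp is strictly increasing, so strict inequalities between samples are preserved
print(np.array_equal(local_max_indices(f), local_max_indices(np.exp(f))))  # True
```

The same check passes for any strictly monotone `g`, since monotonicity preserves (or, if decreasing, reverses) every pairwise comparison.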
|
|calculus|signal-processing|
| 1
|
If and only if, which direction is which?
|
I can never figure out (because the English language is imprecise) which part of "if and only if" means which implication. ($A$ if and only if $B$) $= (A \iff B)$, but is the following correct?

- ($A$ only if $B$) $= (A \implies B)$
- ($A$ if $B$) $= (A \impliedby B)$

The trouble is, one never comes into contact with "$A$ if $B$" or "$A$ only if $B$" using those constructions in everyday common speech.
|
This example may be clearer, because apples ⊂ fruits is more obvious:

- "This is an apple if it is a fruit" is false.
- "This is an apple only if it is a fruit" is true.
- "This is a fruit if it is an apple" is true.
- "This is a fruit only if it is an apple" is false.

($A$ is an apple $\implies$ $A$ is a fruit.)
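For readers who think in code, material implication can be checked directly; a tiny sketch of my own using the apple/fruit example (`implies` is just the standard truth-table definition):

```python
def implies(a, b):
    # material implication: "a only if b", i.e. a => b
    return (not a) or b

# a banana: not an apple, but a fruit
is_apple, is_fruit = False, True
print(implies(is_apple, is_fruit))   # True:  "apple only if fruit" survives
print(implies(is_fruit, is_apple))   # False: "apple if fruit" fails here
```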
|
|terminology|logic|
| 0
|
Useful examples of pathological functions
|
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
|
The function $f(x) = x$ for rational $x$ and $f(x) = 2x$ for irrational $x$ is locally increasing at $0$ (every $x < 0$ has $f(x) < f(0)$ and every $x > 0$ has $f(x) > f(0)$), yet it is neither increasing nor decreasing on any neighbourhood of $0$.
|
|soft-question|big-list|calculus|real-analysis|
| 0
|
Useful examples of pathological functions
|
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
|
Also: the Conway base 13 function. This function has the following properties: 1. On every closed interval $[a, b]$, it takes every real value. 2. It is continuous nowhere.
|
|soft-question|big-list|calculus|real-analysis|
| 0
|
Useful examples of pathological functions
|
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
|
Have also a look here: https://mathoverflow.net/questions/22189/what-is-your-favorite-strange-function
|
|soft-question|big-list|calculus|real-analysis|
| 1
|
Conjectures that have been disproved with extremely large counterexamples?
|
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$ ). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$ . Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you
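A minimal version of the experiment the question describes (my own sketch, not the asker's cloud-scale code):

```python
def collatz_steps(n):
    # iterate n -> n/2 (even), 3n+1 (odd) and count steps until reaching 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# every starting value tested so far reaches 1; 27 famously takes 111 steps
print(collatz_steps(27))  # 111
print(all(collatz_steps(n) >= 0 for n in range(1, 10_000)))  # True (they all halt)
```

Of course, as the question points out, no finite amount of testing settles the conjecture.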
|
Further counterexamples can be found here: https://mathoverflow.net/questions/15444/the-phenomena-of-eventual-counterexamples
|
|big-list|conjectures|big-numbers|
| 0
|
Tetrahedron volume
|
How do you calculate the volume of a tetrahedron given the lengths of all its edges?
|
The Cayley–Menger determinant, a generalization of Heron's formula.
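For reference, here is a hedged sketch of that formula (my own, with hypothetical labels: vertices 1–4, `dij` the edge between vertices $i$ and $j$). For a tetrahedron, $288\,V^2$ equals the $5\times 5$ Cayley–Menger determinant of the squared edge lengths:

```python
import numpy as np

def tetrahedron_volume(d12, d13, d14, d23, d24, d34):
    # Cayley-Menger determinant: 288 * V^2 = det(M) for the bordered matrix below
    sq = lambda d: d * d
    M = np.array([
        [0, 1,       1,       1,       1      ],
        [1, 0,       sq(d12), sq(d13), sq(d14)],
        [1, sq(d12), 0,       sq(d23), sq(d24)],
        [1, sq(d13), sq(d23), 0,       sq(d34)],
        [1, sq(d14), sq(d24), sq(d34), 0      ],
    ], dtype=float)
    det = np.linalg.det(M)
    if det < 0:
        raise ValueError("no tetrahedron has these edge lengths")
    return (det / 288.0) ** 0.5

# regular tetrahedron with unit edges: V = 1/(6*sqrt(2)) ~ 0.1178511
print(tetrahedron_volume(1, 1, 1, 1, 1, 1))
```

A negative determinant signals that the six lengths violate the embedding conditions, which is exactly the content of the general Cayley–Menger criterion.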
|
|geometry|polyhedra|
| 1
|
Natural derivation of the complex exponential function?
|
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log). Notice that every deduction above follows from a natural question. We never need to guess anything to proceed. Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows: Derive the real exponential by some method (inverse function to the natural log, which
|
So, what's unnatural about the complex differential equation: $f : \mathbb{C} \to \mathbb{C}$ satisfying $f'(z) = f(z)$ and $f(0) = 1$?
|
|complex-analysis|real-analysis|general-topology|
| 0
|
Natural derivation of the complex exponential function?
|
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log). Notice that every deduction above follows from a natural question. We never need to guess anything to proceed. Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows: Derive the real exponential by some method (inverse function to the natural log, which
|
Let $f(x) = \cos(x) + i\sin(x)$. Then $\frac{df}{dx} = -\sin(x) + i\cos(x) = i f(x)$, so that $\int \frac{1}{f}\,df = \int i\,dx$, giving $\ln(f(x)) = ix + C$ and $f(x) = e^{ix + C} = \cos(x) + i\sin(x)$. Since $f(0) = 1$, $C = 0$, so $e^{ix} = \cos(x) + i\sin(x)$.
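The resulting identity is easy to sanity-check numerically (a one-liner of my own, using Python's `cmath`):

```python
import cmath
import math

x = 0.7
# |e^{ix} - (cos x + i sin x)| should vanish up to floating-point noise
print(abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))))
```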
|
|complex-analysis|real-analysis|general-topology|
| 0
|
A definition of Conway base-$13$ function
|
Can you give a definition of the Conway base-$13$ function better than the one actually presented on wikipedia , which isn't clear? Maybe with some examples?
|
The idea of the Conway base-13 function is to find a function that is not continuous, yet such that if $f(a) < x < f(b)$, then there is some $c$ between $a$ and $b$ with $f(c)=x$. This is a counterexample to the converse of the intermediate value theorem. The function is defined by encoding base-10 values in the tail of the base-13 expansion (the digits left after skipping a finite prefix). We use +, -, . and the digits to represent an encoded number in the tail, and require the encoded number to start with a + or -. In base 10, every number ending in infinitely many 9s can be rewritten to end in infinitely many 0s instead (i.e. 0.999...=1.0). Similarly, we decide to rewrite each Conway number ending in infinitely many + symbols, to ensure that each real number has a unique representation. Each number can have at most one base-10 encoded value in its tail, which is the result of applying Conway's base-13 function if it exists. If no such encoding exists for $x$ (e.g. + occurring infinitely many times in the expansion), then we define $f(x)=0$
|
|calculus|number-theory|definition|
| 0
|
A definition of Conway base-$13$ function
|
Can you give a definition of the Conway base-$13$ function better than the one actually presented on wikipedia , which isn't clear? Maybe with some examples?
|
I understand why the Wikipedia article uses the notation it does, but I find it annoying. Here is a transliteration, with some elaboration. Expand x ∈ (0,1) in base 13, using digits {0, 1, ... , 9, d , m , p } --- using the convention d = 10, m = 11, p = 12. N.B. for rational numbers whose most reduced expression a/b is such that b is a power of 13, there are two such expansions: a terminating expansion, and a non-terminating one ending in repeated p digits. In such a case, use the terminating expansion. Let S ⊂ (0,1) be the set of reals whose expansion involves finitely many p , m , and d digits, such that the final d digit occurs after the final p digit and the final m digit. (We may require that there be at least one digit 0--9 between the final p or m digit and the final d digit, but this does not seem to be necessary.) Then, every x ∈ S has a base 13 expansion of the form 0. x 1 x 2 ... x n [ p or m ] a 1 a 2 ... a k [ d ] b 1 b 2 ... for some digits x j ∈ {0, ... , p } and where
|
|calculus|number-theory|definition|
| 0
|
Natural derivation of the complex exponential function?
|
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log). Notice that every deduction above follows from a natural question. We never need to guess anything to proceed. Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows: Derive the real exponential by some method (inverse function to the natural log, which
|
Some Assumptions I will assume that you are ok with power series being used, just not Taylor's theorem. I will also assume you will allow us to observe a solution to a DE since you used it in your derivation. Defn A series of the form $\sum_{n=0}^{\infty}c_n\left(z-z_0\right)^n$ for $c_n,z,z_0\in\mathbb{C}$ is called a power series . Thm There is some $R\in[0,\infty]$ such that the power series above converges absolutely for all $z\in\mathbb{C}$ with $\mid z-z_0\mid < R$ and diverges whenever $\mid z-z_0\mid > R$. pf Use the geometric series' convergence. Lemma Inside the disk of convergence $\sum_{n=1}^{\infty}nc_n\left(z-z_0\right)^{n-1}$ is the derivative of the power series. Construction For power series $y(z)=\sum c_nz^n$, can we find a unique solution to $y'(z)=y(z)$ with $y(0)=1$ in $\mathbb{C}$? We can observe that this implies $nc_n=c_{n-1}$. Hence Defn Let $E\left(z\right):=\sum_{n=0}^{\infty}\frac{1}{n!}z^n$. Thm 1) $E'=E$ 2) $E\left(z_1+z_2\right)=E\left(z_1\right)E\left(z_2\right)$ 3) $E_{\mid_{\mathbb{R}}}$ is strictly increasin
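The series $E(z)$ above can be evaluated directly; a small sketch of my own (not the answerer's) comparing partial sums against `cmath.exp`:

```python
import cmath

def E(z, terms=40):
    # partial sums of sum_{n>=0} z^n / n!, building each term incrementally
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)
    return total

z = 1 + 2j
print(abs(E(z) - cmath.exp(z)) < 1e-10)  # True: the series converges very fast
```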
|
|complex-analysis|real-analysis|general-topology|
| 0
|
Paradox: increasing sequence that goes to $0$?
|
It is $10$ o'clock, and I have a box. Inside the box is a ball marked $1$. At $10$:$30$, I will remove the ball marked $1$, and add two balls, labeled $2$ and $3$. At $10$:$45$, I will remove the balls labeled $2$ and $3$, and add $4$ balls, marked $4$, $5$, $6$, and $7$. $7.5$ minutes before $11$, I will remove the balls labeled $4$, $5$, and $6$, and add $8$ balls, labeled $8$, $9$, $10$, $11$, $12$, $13$, $14$, and $15$. This pattern continues. Each time I reach the halfway point between my previous action and $11$ o'clock, I add some balls, and remove some other balls. Each time I remove one more ball than I removed last time, but add twice as many balls as I added last time. The result is that as it gets closer and closer to $11$, the number of balls in the box continues to increase. Yet every ball that I put in was eventually removed. So just how many balls will be in the box when the clock strikes $11$? $0$, or infinitely many? What's going on here?
|
[ Edit. ] I am editing my answer to try to give some more insight, prompted by comments on my original response. I hope to develop a deeper idea into what's going on here. The short answer is that what is important is not the size of the set of balls at any particular time, but rather how the set of balls changes; and ultimately what the limit of those sets are. The key is to determine what that limit is, and then determine how many balls are in that limiting set. The answer is that the limit is the empty set, which has size 0. The rest of this answer is devoted to describing this in some detail. Part of what I have added to my answer is to point out that although there are multiple ways of measuring convergence — in terms of various norms on characteristic functions — only one of these actually defines the limit of the sequence, and in this case the limit is well-defined. What matters is not the number of balls, but the set of balls In this problem, we have more than just a number of
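The schedule in the question is easy to simulate. This sketch is my own, under the reading that the $k$-th step removes the $k$ lowest-numbered balls and adds twice as many balls as the previous step; it shows the smallest label in the box running off to infinity even as the box grows, which is exactly the set-theoretic limit argument above:

```python
def box_after(steps):
    box = {1}
    next_label = 2
    for k in range(1, steps + 1):
        for _ in range(k):
            box.remove(min(box))              # remove the k lowest-numbered balls
        box.update(range(next_label, next_label + 2**k))
        next_label += 2**k                    # twice as many additions as last step
    return box

for steps in (5, 10, 15):
    b = box_after(steps)
    print(steps, len(b), min(b))  # the box grows, yet every fixed ball leaves
```

After $n$ steps the box is exactly $\{n(n+1)/2 + 1, \dots, 2^{n+1}-1\}$, so any particular ball is eventually removed and the limiting set is empty.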
|
|soft-question|paradoxes|
| 1
|
Interpolating between volume preserving diffeomorphisms of sphere
|
I know the volume preserving diffeomorphisms of the $2$-sphere form a group, SDiff($S^2$). I would like to know if it is a Lie group, which I assume would make interpolation easier (as with rotations). So that is one question: is it a Lie group? Also, is the group path connected? If so, how can I interpolate between two elements of the group? These are subjects I know very little about; I apologize if I'm phrasing things in a way that sounds ridiculous.
|
First, as others have pointed out, the group of volume preserving diffeomorphisms will be infinite dimensional. For the second question, there is a beautiful technique known as Moser's trick which answers it. Moser's trick, in fancy language, says that if $(M,w)$ and $(M,w')$ are two symplectic structures on the same manifold, and if $[w] = [w']$ in $H^2(M;R)$ (de Rham cohomology), then there is a family of diffeomorphisms $f_t:M \to M$ with $f_0 = Id$ and such that $f_1$ pulls $w'$ back to $w$. For a $2$-dimensional compact, oriented manifold (like the sphere), we have $H^2(M;R) = R$ (the real numbers), and a symplectic form is nothing but a nonzero element of $R$ (which can be interpreted as the total volume). Since in this setting $[w] = [w']$ iff they both give the same signed volume, it follows from Moser's trick that the group of (signed) volume preserving maps is connected. If we consider unsigned volume, there will be $2$ components to the group of diffeomorphisms preserving the
|
|geometry|abstract-algebra|
| 1
|
How do you prove that $p(n \xi)$ for $\xi$ irrational and $p$ a polynomial is uniformly distributed modulo 1?
|
The Weyl equidistribution theorem states that the sequence of fractional parts $\{n \xi\}$, $n = 0, 1, 2, \dots$ is uniformly distributed for $\xi$ irrational. This can be proved using a bit of ergodic theory, specifically the fact that an irrational rotation is uniquely ergodic with respect to Lebesgue measure. It can also be proved by simply playing with trigonometric polynomials (i.e., polynomials in $e^{2\pi i k x}$ for $k$ an integer) and using the fact that they are dense in the space of all continuous functions with period 1. In particular, one shows that if $f(x)$ is a continuous function with period 1, then for any $t$, $\int_0^1 f(x) dx = \lim \frac{1}{N} \sum_{i=0}^{N-1} f(t+i \xi)$. One shows this by checking it (directly) for trigonometric polynomials via the geometric series. This is a very elementary and nice proof. The general form of Weyl's theorem states that if $p$ is a monic integer-valued polynomial, then the sequence $\{p(n \xi)\}$ for $\xi$ irrational is uniformly distr
|
There is a fairly good exposition in Terry Tao's post , see Corollaries 4-6. Here is a sketch: We prove the more general statement: Let $p(n)= \chi n^d + a_{d-1} n^{d-1} + \cdots + a_1 n + a_0$ be any polynomial, with $\chi$ irrational. Then $p(n) \mod 1$ is equidistributed. Our proof is by induction on $d$; the base case $d=1$ is standard. Set $e(x) = e^{2 \pi i x}$. By the standard trickery with exponential polynomials, it is enough to show $$\sum_{n=0}^{N-1} e(p(n)) = o(N).$$ Choose a positive integer $h$. With a small error, we can replace the sum by $$\sum_{n=0}^{N-1} (1/h) \left( e(p(n)) + e(p(n+1)) + \cdots + e(p(n+h-1)) \right).$$ By Cauchy-Schwarz, this is bounded by $$\frac{\sqrt{N}}{h} \left[ \sum_{n=0}^{N-1} \left( e(p(n)) + \cdots + e(p(n+h-1)) \right) \overline{ \left( e(p(n)) + \cdots + e(p(n+h-1)) \right)} \right]^{1/2}.$$ Expanding the inner sum, we get $h^2$ terms of the form $e(p(n) - p(n+k))$. There are $h$ terms where $k=0$; these each sum up to $N$. For the other
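The statement is easy to probe numerically; here is a quick empirical check (not a proof, and my own sketch) that the fractional parts of $n^2\sqrt{2}$ fill the unit interval uniformly:

```python
import math

xi, N, bins = math.sqrt(2), 100_000, 10
counts = [0] * bins
for n in range(N):
    frac = (n * n * xi) % 1.0
    counts[min(int(frac * bins), bins - 1)] += 1

# each bin should hold roughly N/bins points; report the worst relative deviation
print(max(abs(c / N - 1 / bins) for c in counts))
```

The deviation shrinks as $N$ grows, consistent with the equidistribution the van der Corput argument above proves.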
|
|number-theory|
| 1
|
Group With an Endomorphism That is "Almost" Abelian is Abelian.
|
Suppose a finite group has the property that for every $x, y$, it follows that \begin{equation*} (xy)^3 = x^3 y^3. \end{equation*} How do you prove that it is abelian? Edit: I recall that the correct exercise needed in addition that the order of the group is not divisible by 3.
|
On the other hand, if the order of your group is not a multiple of 3 then it must be abelian! You can read a proof here
|
|group-theory|
| 1
|
Natural derivation of the complex exponential function?
|
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log). Notice that every deduction above follows from a natural question. We never need to guess anything to proceed. Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows: Derive the real exponential by some method (inverse function to the natural log, which
|
Also, if you are OK with power series, the Prologue to Walter Rudin's Real and Complex Analysis seems like exactly what you want. It's a beautiful development of exp , as well as sin , cos , e , and even π , all quite organically.
|
|complex-analysis|real-analysis|general-topology|
| 0
|
Natural derivation of the complex exponential function?
|
Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log). Notice that every deduction above follows from a natural question. We never need to guess anything to proceed. Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows: Derive the real exponential by some method (inverse function to the natural log, which
|
I think essentially the same characterization holds. The complex exponential is the unique Lie group homomorphism from $\mathbb{C}$ to $\mathbb{C}^*$ such that the (real) derivative at the identity is the identity matrix.
|
|complex-analysis|real-analysis|general-topology|
| 1
|
Useful examples of pathological functions
|
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
|
The Cauchy functionals, which satisfy the very simple equation $f(a+b) = f(a)+f(b)$ for all real $a, b$. These are either a line through the origin (the "nice" ones) or really "ugly" functions that are discontinuous and unbounded in every interval. The latter are possible because the Axiom of Choice implies (and is in fact equivalent to) the statement that every vector space has a basis; in particular, the reals over the rationals have a Hamel basis. A great explanation of all this (including the nice/ugly terminology) is in Horst Herrlich's monograph The Axiom of Choice .
|
|soft-question|big-list|calculus|real-analysis|
| 0
|
Intuitive Way To Understand Principal Component Analysis
|
I know that this is meant to explain variance, but the description on Wikipedia stinks, and it is not clear how you can explain variance using this technique. Can anyone explain it in a simple way?
|
PCA basically is a projection of a higher-dimensional space into a lower-dimensional space while preserving as much information as possible. I wrote a blog post where I explain PCA via the projection of a 3D teapot onto a 2D plane while preserving as much information as possible. Details and full R code can be found in the post: http://blog.ephorie.de/intuition-for-principal-component-analysis-pca
|
|statistics|terminology|intuition|visualization|descriptive-statistics|
| 0
|
Sum of two periodic functions
|
Let $f$ and $g$ be two periodic functions on $\Bbb{R}$ with the following property: if $T$ is a period of $f$ and $S$ is a period of $g$, then $T/S$ is irrational. Conjecture : $f+g$ is not periodic. Could you give a proof or a counterexample? It is easier if we assume continuity. But is it true for arbitrary real valued functions?
|
If each function has a smallest period, and otherwise fits the conditions, then a proof may be forthcoming by attempting to compute the smallest period of the sum and failing. However, things become unclear if there is no smallest period, as in the case of the characteristic function of the rationals. Progress might be made in this case by decomposing such a function as an infinite sum of periodic functions, or at least give more counterexamples to study. (e.g. Write the characteristic function of the rationals as an infinite sum of functions of smallest period 1. )
|
|analysis|
| 0
|
Best Algebraic Geometry text book? (other than Hartshorne)
|
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
|
I'm really enjoying Andreas Gathmann's lecture notes . They are pretty elementary and surprisingly complete (for lecture notes). Reid also has a really nice text on algebraic geometry («Undergraduate algebraic geometry»).
|
|big-list|reference-request|algebraic-geometry|
| 0
|
Watchdog Problem
|
I just came up with this problem yesterday. Problem : Assume there is an important segment of straight line AB that needs to be watched at all times. A watchdog can see in one direction in front of itself and must walk at a constant non-zero speed at all times. (The watchdogs need not all have the same speed.) When it reaches the end of the segment, it must turn back (taking no time) and keep watching the line. How many watchdogs are needed to guarantee that the line segment is watched at all times? And how (initial positions and speeds of the dogs)? Note : It's clear that two dogs are not enough. I conjecture that four will suffice and three will not. For example, the configuration below fails from 7.5 seconds onward if AB 's length is 10 meters. Dog 1 at A walks to the right with speed 1.0 m/s. Dog 2, between A and B, walks to the right with speed 1.0 m/s. Dog 3 at B walks to the left with speed 1.0 m/s. Or it can be illustrated as: A ---------------------------------------- B 0.0 sec 1 -->
|
I'll make the trivial answer: 1 dog at point A, facing point B, walking with a velocity of 0. Presumably, you should really highlight that the dogs' velocities must be non-zero...this is the kind of side case that math people love to exploit.
|
|discrete-mathematics|recreational-mathematics|puzzle|word-problem|
| 0
|
Real world uses of hyperbolic trigonometric functions
|
I covered hyperbolic trigonometric functions in a recent maths course. However, I was never presented with any reasons as to why (or even whether) they are useful. Are there any good examples of their uses outside academia?
|
Velocity addition in (special) relativity is not linear, but becomes linear when expressed in terms of hyperbolic tangent functions. More precisely, if you add two motions in the same direction, such as a man walking at velocity $v_1$ on a train that moves at $v_2$ relative to the ground, the velocity $v$ of the man relative to ground is not $v_1 + v_2$; velocities don't add (otherwise by adding enough of them you could exceed the speed of light). What does add is the inverse hyperbolic tangent of the velocities (in speed-of-light units, i.e., $v/c$). $$\tanh^{-1}(v/c)=\tanh^{-1}(v_1/c) + \tanh^{-1}(v_2/c)$$ This is one way of deriving special relativity: assume that a velocity addition formula holds, respecting a maximum speed of light and some other assumptions, and show that it has to be the above.
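This is quick to verify numerically (my own sketch, in units where $c = 1$):

```python
import math

v1, v2 = 0.5, 0.5                       # velocities as fractions of c
einstein = (v1 + v2) / (1 + v1 * v2)    # relativistic velocity addition formula
rapidity = math.tanh(math.atanh(v1) + math.atanh(v2))  # rapidities simply add

print(einstein, rapidity)  # both ~0.8, still below the speed of light
```

Two half-lightspeed motions compose to $0.8c$, not $1.0c$, and the two computations agree to machine precision.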
|
|soft-question|big-list|applications|hyperbolic-functions|
| 0
|
Division by imaginary number
|
I ran into a problem dividing by imaginary numbers recently. I was trying to simplify: $2 \over i$ I came up with two methods, which produced different results: Method 1: ${2 \over i} = {2i \over i^2} = {2i \over -1} = -2i$ Method 2: ${2 \over i} = {2 \over \sqrt{-1}} = {\sqrt{4} \over \sqrt{-1}} = \sqrt{4 \over -1} = \sqrt{-4} = 2i$ I know from using the formula from this Wikipedia article that method 1 produces the correct result. My question is: why does method 2 give the incorrect result ? What is the invalid step?
|
The incorrect step is saying: $\sqrt{4}/\sqrt{-1} = \sqrt{4/-1}$. The identity $\sqrt{a}/\sqrt{b} = \sqrt{a/b}$ is only justified when $a \ge 0$ and $b > 0$.
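Python's `cmath` (which uses the principal square root) makes the failure easy to see; a small check of my own:

```python
import cmath

m1 = 2 / 1j                          # method 1: multiply through by i -> -2i
m2 = cmath.sqrt(4) / cmath.sqrt(-1)  # dividing the principal roots: also -2i
m3 = cmath.sqrt(4 / -1)              # merging under one sqrt first: +2i
print(m1, m2, m3)                    # the merged version has the wrong sign
```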
|
|algebra-precalculus|complex-numbers|arithmetic|
| 1
|
Division by imaginary number
|
I ran into a problem dividing by imaginary numbers recently. I was trying to simplify: $2 \over i$ I came up with two methods, which produced different results: Method 1: ${2 \over i} = {2i \over i^2} = {2i \over -1} = -2i$ Method 2: ${2 \over i} = {2 \over \sqrt{-1}} = {\sqrt{4} \over \sqrt{-1}} = \sqrt{4 \over -1} = \sqrt{-4} = 2i$ I know from using the formula from this Wikipedia article that method 1 produces the correct result. My question is: why does method 2 give the incorrect result ? What is the invalid step?
|
This is exactly the same issue as in this question . Each non-zero complex number has two numbers that square to give it, with the same magnitude but opposite signs. When we define the square root function, we have to decide which of the roots we want. For positive numbers, it is obvious to choose the positive root. For negative numbers, we choose the root with positive imaginary value, although because of symmetry the choice doesn't mean much anyway. So, to see whether the standard multiplication and division laws apply, we have to consider the domain the numbers are in. We already know they apply for non-negative real numbers. It is easy enough to verify that for negative numbers $$\sqrt{a}*\sqrt{b}=-\sqrt{ab}$$ and $$\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$$ We also see that, if $a$ is positive and $b$ negative, then $$\sqrt{ab}=\sqrt{a}\sqrt{b}$$ $$\sqrt{\frac{a}{b}}=-\frac{\sqrt{a}}{\sqrt{b}}$$ and $$\sqrt{\frac{b}{a}}=\frac{\sqrt{b}}{\sqrt{a}}$$
|
|algebra-precalculus|complex-numbers|arithmetic|
| 0
|
Intuitive Way To Understand Principal Component Analysis
|
I know that this is meant to explain variance, but the description on Wikipedia stinks, and it is not clear how you can explain variance using this technique. Can anyone explain it in a simple way?
|
Principal component analysis is a useful technique when dealing with large datasets. In some fields, (bioinformatics, internet marketing, etc) we end up collecting data which has many thousands or tens of thousands of dimensions. Manipulating the data in this form is not desirable, because of practical considerations like memory and CPU time. However, we can't just arbitrarily ignore dimensions either. We might lose some of the information we are trying to capture! Principal component analysis is a common method used to manage this tradeoff. The idea is that we can somehow select the 'most important' directions, and keep those, while throwing away the ones that contribute mostly noise. For example, this picture shows a 2D dataset being mapped to one dimension: Note that the dimension chosen was not one of the original two: in general, it won't be, because that would mean your variables were uncorrelated to begin with. We can also see that the direction of the principal component is the
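The idea in the answer can be sketched in a few lines of NumPy (my own illustration: synthetic correlated 2D data, projected onto its top principal direction via the SVD of the centered data matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = np.column_stack([x, x + 0.1 * rng.normal(size=500)])  # strongly correlated

centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per principal component
projected = centered @ vt[0]      # 1D coordinates along the first component

print(explained[0])  # close to 1: one direction carries almost all the variance
```

Note that the top direction `vt[0]` is roughly the diagonal, not either original axis, mirroring the picture described in the answer.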
|
|statistics|terminology|intuition|visualization|descriptive-statistics|
| 1
|
What calculation shortcuts exist to help or speed-up mental (or paper) calculations?
|
Anything to speed up or simplify calculations. A simple example might be to get a multiple of $19$, for instance, $38 \cdot 19 = 38 \cdot 20 - 38$. (This is hard to tag with so few tags in play!) mental-calculations tips tricks shortcut cheats time-saver
|
Art Benjamin is your man! He has many tricks to speed up mental calculation and other fun mathemagical tricks. He also wrote two books on the subject! Here is a video of him in action: http://www.youtube.com/watch?v=M4vqr3_ROIk Here is his new book: http://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Calculation/dp/0307338401
|
|big-list|arithmetic|
| 0
|
What calculation shortcuts exist to help or speed-up mental (or paper) calculations?
|
Anything to speed up or simplify calculations. A simple example might be to get a multiple of $19$, for instance, $38 \cdot 19 = 38 \cdot 20 - 38$. (This is hard to tag with so few tags in play!) mental-calculations tips tricks shortcut cheats time-saver
|
To square a number ending in 5: Remove the ending 5. Let the resulting number be $n$, and compute $n(n+1)$. Append 25 to the end of $n(n+1)$ and that's your answer. Example: $85^2$. Here, we drop the last digit to get 8, compute $8 \cdot 9 = 72$, so $85^2 = 7225$. Similarly, we can compute $115^2$. Here, we drop the last digit to get 11, compute $11 \cdot 12 = 132$, so $115^2 = 13225$. How does this work?: Note that $(10n + 5)^2 = 100n^2 + 100n + 25 = 100 \cdot n(n+1) + 25$.
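The trick is easy to verify by brute force (a Python sketch; the helper name is mine):

```python
def square_ending_in_5(m):
    # m ends in 5: drop the 5, compute n*(n+1), append "25".
    n = m // 10
    return int(str(n * (n + 1)) + "25")

# Check the trick against ordinary squaring for every such number below 1000:
for m in range(5, 1000, 10):
    assert square_ending_in_5(m) == m * m
```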
|
|big-list|arithmetic|
| 0
|
What are Your Favourite Maths Puzzles?
|
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
|
Here are three of my favorite variations on the hats and prisoners puzzle that I've collected over time: Fifteen prisoners sit in a line, and hats are placed on their heads. Each hat can be one of two colors: white or black. They can see the colors of the people in front of them but not behind them, and they can’t see their own hat colors. Starting from the back of the line (with the person who can see every hat except his own), each prisoner must try to guess the color of his own hat. If he guesses correctly, he escapes. Otherwise, he is fed to cannibals (because that’s the canonical punishment for failing at hat problems). Each prisoner can hear the guess of each person behind him. By listening for painful screaming and the cheering of cannibals, he can also deduce if each of those guesses was accurate. Of course, this takes place in some magical mathematical universe where people don’t cheat. Assuming that they do not want to be eaten, find the optimal guessing strategy for the pris
|
|soft-question|recreational-mathematics|puzzle|big-list|
| 0
|
What's the most effective ways of teaching kids - times tables?
|
I'd like to help a $6$ year old who already has a pretty good grasp of $2$, $5$, and $10$ times tables.
|
My personal experience with my 7/8 year old is that Math War works very well for practice. You can play with a normal deck of cards: each player plays two cards and announces the product, with the higher number winning all the cards. In case of a tie, there is "War", which means you put two (or more) cards face down and then have another battle on top, with the winner taking the whole pile. You can also play the game with any set of ordinary flash cards. My son is interested to go for a long time with this... I would also add, however, that although I have heard many people say that one must master the basic math facts before learning more advanced math concepts, I think that this is nonsense. Go ahead and talk about any kind of mathematics (at the right level) with your child.
|
|soft-question|education|
| 0
|
Proof that $n^3+2n$ is divisible by $3$
|
I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by
|
In a proof by induction, we try to prove that a statement is true for all natural numbers $n.$ To do this, we first check the base case, which is the "Basis Step" above. Then, we have the induction hypothesis, where we assume that the statement is true for an integer $k.$ Using this fact, we prove that the statement is also true for the next integer $k+1.$ This produces a bootstrapping ladder: we use the base case $(n=0)$ to show that the statement is true for $n=1$ via the inductive step, then for $n=2,$ etc., off to infinity; this shows that the statement is true for all natural numbers $n.$ Here, $(n^3 + 2n)$ is divisible by $3$ by the inductive hypothesis, and $3(n^2 + n + 1)$ is divisible by $3$ because it is $3$ times an integer, so their sum is divisible by $3$ as well; since that sum equals $(n+1)^3 + 2(n+1),$ this is exactly what we needed to show for $n+1.$
|
|elementary-number-theory|induction|
| 0
|
Proof that $n^3+2n$ is divisible by $3$
|
I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by
|
Why don't you just test the validity of this using modular arithmetic? Since $n^3 + 2n \bmod 3$ depends only on $n \bmod 3$, there are only three cases. If $n \equiv 0 \pmod 3$, both terms are divisible by $3$. If $n \equiv 1 \pmod 3$, we easily get $1 + 2 = 3 \equiv 0 \pmod 3$. If $n \equiv 2 \pmod 3$, you get $8+4\equiv 12 \equiv 0 \pmod 3$, so you're done. No ugly inductions (although this particular case is not so dirty). A useful idea when thinking of induction is to think of dominoes. If you know something is true for one fixed tile and if you know that it being true for one tile means that it's true for the neighbour on the right, then it's like knocking one over knocks them all over.
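Since $n^3 + 2n \bmod 3$ depends only on $n \bmod 3$, the whole claim reduces to finitely many residue checks, which can be done mechanically (a Python sketch):

```python
# Exhaustive check of n^3 + 2n over all residues mod 3:
for r in range(3):
    assert (r**3 + 2*r) % 3 == 0   # values 0, 3, 12 are all divisible by 3
# This covers every integer n, because (n^3 + 2n) mod 3 depends only on n mod 3.
```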
|
|elementary-number-theory|induction|
| 0
|
Proof that $n^3+2n$ is divisible by $3$
|
I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by
|
In the inductive hypothesis, you assumed that $n^3 + 2n$ was divisible by 3 for some $n$, and now you're proving the same for $n+1$. It's like knocking down dominoes: if you can prove that the first domino falls over (base case) and each domino knocks over the next (inductive step), then that means that all of the dominoes get knocked down eventually. You know that $(n^3 + 2n) + 3(n^2 + n + 1)$ is divisible by 3 because $n^3 + 2n$ is (because of the inductive hypothesis) and $3(n^2 + n + 1)$ is (because it's 3 times an integer). So, their sum is as well.
|
|elementary-number-theory|induction|
| 1
|
Proof that $n^3+2n$ is divisible by $3$
|
I'm trying to freshen up for school in another month, and I'm struggling with the simplest of proofs! Problem: For any natural number $n , n^3 + 2n$ is divisible by $3.$ This makes sense Proof: Basis Step: If $n = 0,$ then $n^3 + 2n = 0^3 +$ $2 \times 0 = 0.$ So it is divisible by $3.$ Induction: Assume that for an arbitrary natural number $n$, $n^3+ 2n$ is divisible by $3.$ Induction Hypothesis: To prove this for $n+1,$ first try to express $( n + 1 )^3 + 2( n + 1 )$ in terms of $n^3 + 2n$ and use the induction hypothesis. Got it $$( n + 1 )^3+ 2( n + 1 ) = ( n^3 + 3n^2+ 3n + 1 ) + ( 2n + 2 ) \{\text{Just some simplifying}\}$$ $$ = ( n^3 + 2n ) + ( 3n^2+ 3n + 3 ) \{\text{simplifying and regrouping}\}$$ $$ = ( n^3 + 2n ) + 3( n^2 + n + 1 ) \{\text{factored out the 3}\}$$ which is divisible by $3$, because $(n^3 + 2n )$ is divisible by $3$ by the induction hypothesis. What? Can someone explain that last part? I don't see how you can claim $(n^3+ 2n ) + 3( n^2 + n + 1 )$ is divisible by
|
Presumably you're only looking for a way to understand the induction problem, but you can note that $n^3+2n = n^3 - n + 3n = (n-1)(n)(n+1) + 3n$. Since any three consecutive integers contain a multiple of three, we're adding two multiples of three and so get another multiple of $3$.
|
|elementary-number-theory|induction|
| 0
|
$f(a(x))=f(x)$ - functional equation
|
I was reading "Functional Equations and How to Solve Them" by Small and the following comment pops up without much justification on p. 13: If $a(x)$ is an involution, then $f(a(x))=f(x)$ has as solutions $f(x) = T\,[x,a(x)]$, where $T$ is an arbitrary symmetric function of $u$ and $v$. I was wondering why this was true (it works for examples I've tried, but I am not sure $(1)$ how to prove this and $(2)$ if there's anything obvious staring at me in the face here).
|
Any function $f(x)$ that is a solution to your functional equation $f(a(x)) = f(x)$ must satisfy the property that it is unchanged when you plug in $a(x)$ instead of $x$. In addition, $f(x)$ clearly can be written as $f(x) = T[x, a(x)]$ for some function $T$ depending on $x$ and $a(x)$; the question is to see why $T$ must be symmetric. Now, $T[x, a(x)] = f(x) = f(a(x)) = T[a(x), a(a(x))] = T[a(x), x]$ since $a(x)$ is an involution. In particular, this means that $T$ must be symmetric in its two variables.
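A numerical illustration (Python sketch; the particular involution $a(x) = 1 - x$ and symmetric $T$ are my own choices for the example):

```python
def a(x):
    # An involution: a(a(x)) == x.
    return 1 - x

def T(u, v):
    # A symmetric function: T(u, v) == T(v, u).
    return u * v + u + v

def f(x):
    # The claimed solution form f(x) = T[x, a(x)].
    return T(x, a(x))

# Verify f(a(x)) == f(x) at a few sample points:
for x in [0.0, 0.3, 2.5, -7.0]:
    assert abs(f(a(x)) - f(x)) < 1e-12
```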
|
|functional-equations|
| 1
|
How can I tell which matrix decomposition to use for OLS?
|
I want to find the least squares solution to $\boldsymbol{Ax}=\boldsymbol{b}$ where $\boldsymbol{A}$ is a highly sparse square matrix. I found two methods that look like they might lead me to a solution: QR factorization , and singular value decomposition . Unfortunately, I haven't taken linear algebra yet, so I can't really understand most of what those pages are saying. I can calculate both in Matlab though, and it looks like the SVD gave me a smaller squared error. Why did that happen? How can I know which one I should be using in the future?
|
It's all dependent on the 2-norm condition number of your matrix (the ratio of the largest singular value to the smallest); as a nice rule of thumb, if the base-10 logarithm of the condition number is much less than the number of digits your computer uses to store numbers, QR (and maybe even the normal equations) might be sufficient. Otherwise, SVD is a "safer bet": it always works, but is much slower than the other methods for solving least squares problems. Good references would be the classic "Solving Least Squares Problems" by Lawson and Hanson, and the newer "Numerical Methods for Least Squares Problems" by Björck. To add to the answer I gave previously, one way you can proceed for a matrix $A$ whose conditioning you don't know would be as follows: compute the QR decomposition of $A$; estimate the condition number of $R$; if $R$ is well conditioned (condition number is "small enough"), stop; else compute the singular value decomposition $R = U\Sigma V^T$; multiply $Q$ and $U$ to get
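For concreteness, here is a minimal dense QR least-squares sketch (classical Gram-Schmidt, pure Python, function name mine). It is purely illustrative: a real sparse problem would use a library routine, and Householder QR is preferred numerically.

```python
def qr_least_squares(A, b):
    """Solve min ||Ax - b||_2 for a tall full-rank matrix A via thin QR."""
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    # Classical Gram-Schmidt: orthonormalize the columns of A.
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    # Solve R x = Q^T b by back substitution.
    y = [sum(Q[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (y[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# Fit the line c0 + c1*t through (1,1), (2,2), (3,2) in the least-squares sense:
x = qr_least_squares([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]], [1.0, 2.0, 2.0])
assert abs(x[0] - 2/3) < 1e-9 and abs(x[1] - 0.5) < 1e-9
```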
|
|linear-algebra|numerical-methods|numerical-linear-algebra|
| 1
|
Is there a geometrical interpretation to the notion of eigenvector and eigenvalues?
|
The wiki article on eigenvectors offers the following geometrical interpretation: Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue. Qn 1: Is there any other geometrical interpretation, particularly in the context of a covariance matrix? The wiki also discusses the difference between left and right eigenvectors. Qn 2: Do the above geometrical interpretations hold irrespective of whether they are left or right eigenvectors?
|
Instead of giving an answer, let me point out to you this chapter in Cleve Moler's book "Numerical Computing with MATLAB", there is a nice geometric demonstration in MATLAB on how eigenvalues/eigenvectors (as well as singular values/vectors) of an order-2 square matrix are involved in how a circle is transformed into an ellipse after a linear transformation represented by the matrix.
|
|linear-algebra|eigenvalues-eigenvectors|intuition|
| 0
|
Why are superalgebras so important?
|
I know that a superalgebra is a $\mathbb Z/2\mathbb Z$-graded algebra and that it behaves nicely. I know very little physics though, so even though I know that the super- prefix is related to supersymmetry, I don't know what that means; is there a compelling mathematical reason to consider superalgebras?
|
I can summarize one really basic reason, which is actually the reason I originally got interested in the definition. Take a finite-dimensional vector space $V$ of dimension $n$, and let $\left( {n \choose k} \right) = {n+k-1 \choose k}$ denote the number of multisets of size $k$ on a set of size $n$. (Multisets are like subsets except that more than one copy of a given element is possible.) Then the symmetric powers of $V$ have dimensions $\displaystyle \left( {n \choose 1} \right), \left( {n \choose 2} \right), ... $ whereas the exterior powers of $V$ have dimensions $\displaystyle {n \choose 1}, {n \choose 2}, ....$ Now here is a funny identity: it is not hard to see that ${n \choose k} = (-1)^k \left( {-n \choose k} \right)$. One way we might interpret this identity is that the $k^{th}$ exterior power of a vector space of dimension $n$ is like the $k^{th}$ symmetric power of a vector space of dimension $-n$, whatever that means. So what could that possibly mean? The answer (and I'll
|
|abstract-algebra|
| 0
|
Is the $24$ game NP-complete?
|
The $24$ game is as follows. Four numbers are drawn; the player's objective is to make $24$ from the four numbers using the four basic arithmetic operations (in any order) and parentheses however one pleases. Consider the following generalization. Given $n+1$ numbers, determine whether the last one can be obtained from the first $n$ using elementary arithmetical operations as above. This problem admits succinct certificates, so it is in NP. Is it NP-complete?
|
It's worth considering a few ways of showing that the problem is neither in P, nor NPC. I've marked this answer "community wiki", so please feel free to add suggestions and flesh out ideas here. Based on my experience playing the 24 game, it seems that most combinations of numbers are solvable. If we could formalize this, we could show that the 24 game is not NPC. Formally, consider the $2^n$ inputs of length $n$. If all but polynomially many of them are solvable, then the complement of the language is sparse and the language cannot be NPC (unless P=NP).
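Whether a particular small instance is solvable can be checked by brute force: try every ordering of the numbers, every way of splitting them into two groups, and every operation combining the two sub-results. A Python sketch (exponential time, exact rational arithmetic; the function name is mine):

```python
from fractions import Fraction
from itertools import permutations

def reachable(nums, target):
    """Can `target` be made from all of `nums` with +, -, *, /
    and arbitrary parenthesisation? Brute-force, exponential time."""
    def values(xs):
        # Yield every value obtainable from the ordered list xs.
        if len(xs) == 1:
            yield xs[0]
            return
        for i in range(1, len(xs)):
            for left in values(xs[:i]):
                for right in values(xs[i:]):
                    yield left + right
                    yield left - right
                    yield left * right
                    if right != 0:
                        yield left / right
    t = Fraction(target)
    return any(v == t
               for perm in permutations(nums)
               for v in values([Fraction(x) for x in perm]))

assert reachable([4, 6], 24)           # 4 * 6
assert reachable([3, 3, 8, 8], 24)     # 8 / (3 - 8/3), the classic hard case
assert not reachable([1, 1, 1, 1], 24)
```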
|
|computer-science|
| 0
|
Non-integer powers of negative numbers
|
Roots behave strangely over complex numbers . Given this, how do non-integer powers behave over negative numbers? More specifically: Can we define fractional powers such as $(-2)^{-1.5}$? Can we define irrational powers $(-2)^\pi$?
|
We can define complex powers as $z^w := e^{l(z)\cdot w}$ for $z,w \in \mathbb{C}$ and a complex logarithm function $l$. Of course, you need to make sure such a logarithm function $l:U\to\mathbb{C}$ actually exists ($U$ needs to be a simply connected subset of $\mathbb{C}\setminus\{0\}$).
|
|complex-numbers|exponentiation|
| 0
|
Pick's Theorem on a triangular (or hex) grid
|
Pick's theorem says that given a square grid consisting of all points in the plane with integer coordinates, and a polygon without holes and non selt-intersecting whose vertices are grid points, its area is given by: $$i + \frac{b}{2} - 1$$ where $i$ is the number of interior lattice points and $b$ is the number of points on its boundary. Theorem and proof may be found on Wikipedia . Let us suppose that the grid is not square but triangular (or hexagonal). Does a similar theorem hold?
|
For an integer $\lambda$ and a lattice polygon $P$, denote by $\lambda P$ the polygon $P$ stretched by a factor of $\lambda$. Then the number $N(\lambda P)$ of lattice points inside $\lambda P$ is a quadratic polynomial in $\lambda$ with leading coefficient $S(P)$, the area of $P$. This is the form of Pick's theorem that holds for any lattice (and an obvious analogue works in any dimension, unlike the usual Pick's formula, which has no analogue in 3d even for the cubic lattice). For the square lattice it yields ordinary Pick's theorem: for a parallelogram $P$ on the square lattice, a tiling argument gives $N(\lambda P)\approx\lambda^2\left(i+\frac{b}{2}-1\right)$ as $\lambda\to\infty$, so the leading coefficient $S(P)$ equals $i+\frac{b}{2}-1$; the general theorem then follows from the case of parallelograms by additivity.
|
|geometry|discrete-geometry|integer-lattices|
| 0
|
Solution to $1-f(x) = f(-x)$
|
Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$? I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
|
$$f(x)=\frac{1}{2}+\text{(any odd function)}.$$ For example, $f(x)=\frac{1}{2}+x$ or, say, $f(x)=\frac{1}{2}+99x^3+7x^5$.
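A quick numerical check of this characterization, using the odd part $99x^3+7x^5$ from the example (a Python sketch):

```python
def f(x):
    # 1/2 plus an odd function, e.g. g(x) = 99x^3 + 7x^5.
    return 0.5 + 99 * x**3 + 7 * x**5

# Verify f(x) + f(-x) = 1 at a few sample points:
for x in [0.0, 0.1, -2.0, 3.7]:
    assert abs(f(x) + f(-x) - 1.0) < 1e-9
```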
|
|algebra-precalculus|functional-equations|
| 0
|
Solution to $1-f(x) = f(-x)$
|
Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$? I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
|
Let $$f(x) = \begin{cases} 1 & x>0, \\ 1/2 & x=0, \\ 0 & x<0. \end{cases}$$ If $x > 0$, then $-x < 0$, so $f(x)+f(-x)=1+0=1$. Likewise with $x < 0$. If $x=0$, then $f(x)+f(-x)=(1/2)+(1/2)=1$.
|
|algebra-precalculus|functional-equations|
| 0
|
Solution to $1-f(x) = f(-x)$
|
Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$? I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
|
Clearly, we only have relations between $f(x)$ and $f(-x)$. The relation forces $f(0)=1/2$. We can divide all non-zero real numbers into disjoint pairs $\{x, -x\}$ and define the function $f$ on each pair separately. For each pair, $f(x)$ can be given any value, and then $f(-x)=1-f(x)$ is the single valid value. As mentioned by Grigory, the valid functions can be characterised as any odd function plus $1/2$.
|
|algebra-precalculus|functional-equations|
| 0
|
Solution to $1-f(x) = f(-x)$
|
Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$? I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
|
Usually simple problems like this ask you to find a function that respects the condition, not all of them. And (again) usually you start by checking if a simple polynomial function of the first degree could be a solution. So, if $$f(x) = ax + b$$ Then $$f(x) + f(-x) = 1 \implies ax + b + a \cdot (-x) + b = 1 \implies 2b = 1 \implies b = 1/2$$ So the condition is satisfied by any function of the type: $$f(x) = ax + 1/2$$
|
|algebra-precalculus|functional-equations|
| 0
|
Division by imaginary number
|
I ran into a problem dividing by imaginary numbers recently. I was trying to simplify: $2 \over i$ I came up with two methods, which produced different results: Method 1: ${2 \over i} = {2i \over i^2} = {2i \over -1} = -2i$ Method 2: ${2 \over i} = {2 \over \sqrt{-1}} = {\sqrt{4} \over \sqrt{-1}} = \sqrt{4 \over -1} = \sqrt{-4} = 2i$ I know from using the formula from this Wikipedia article that method 1 produces the correct result. My question is: why does method 2 give the incorrect result ? What is the invalid step?
|
The only foolproof way to be sure of finding the right result while dividing two complex numbers $$\frac{a+bi}{c+di}$$ is reducing it to a multiplication. The answer is of the form $x+yi$; therefore $$(c+di)(x+yi) = a+bi$$ and you will end up with two linear equations, one for the real coefficient and another for the imaginary one. As Simon and Casebash already wrote, taking a square root leads to problems, since you cannot be sure which value must be chosen.
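A quick check in Python, whose built-in complex type implements standard complex division (agreeing with Method 1), while the failing step of Method 2 is the identity $\sqrt{a}/\sqrt{b}=\sqrt{a/b}$, which is only guaranteed for nonnegative reals:

```python
import cmath

# Method 1: 2/i = 2i/i^2 = -2i, and Python's complex division agrees.
assert 2 / 1j == -2j

# Method 2 breaks at sqrt(4)/sqrt(-1) = sqrt(4/-1), which fails here:
lhs = cmath.sqrt(4) / cmath.sqrt(-1)   # = 2 / i = -2i
rhs = cmath.sqrt(4 / -1)               # = sqrt(-4) = 2i (principal branch)
assert abs(lhs - (-2j)) < 1e-12
assert abs(rhs - 2j) < 1e-12
assert lhs != rhs
```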
|
|algebra-precalculus|complex-numbers|arithmetic|
| 0
|
Are there variations on least-squares approximations?
|
In least-squares approximations the normal equations act to project a vector existing in N-dimensional space onto a lower dimensional space, where our problem actually lies, thus providing the "best" solution we can hope for (the orthogonal projection of the N-vector onto our solution space). The "best" solution is the one that minimizes the Euclidean distance (two-norm) between the N-dimensional vector and our lower dimensional space. There exist other norms and other spaces besides $\mathbb{R}^d$, what are the analogues of least-squares under a different norm, or in a different space?
|
The usual cases treated apart from least squares are the one-norm and infinity-norm (Chebyshev) cases; they crop up in function approximation for instance. Usually both of these are solved via linear programming techniques .
|
|linear-algebra|big-list|approximation|regression|
| 0
|
Simple lowpass frequency response
|
Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple lowpass filter (something that smooths out a signal), and the filter object has a position variable and a cutoff variable (between 0 and 1). In every step, a value is put into the following bit of pseudocode as "input": position = position*(1-c)+input*c , or more mathematically, f(n) = f(n-1)*(1-c)+x[n]*c . The output is the value of "position." Basically, it moves a percentage of the distance between the current position and then input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time. Given the filter design, how do I construct a function that takes input frequency (where 1 means a sine wave with a wavelength of 2 samples, and .5 means a sine wave with wavelength 4 samples, and 0 is a flat line), and cutoff value (between 1 and 0, as shown ab
|
As I understand it, you are given a sequence $(x_n)_{n\in\mathbb{N}}$ of input values from which you calculate a sequence $f_n$ that is given by the following recurrence relations: $f_0 = 0$, $f_{n+1} = (1-c)f_n + c\cdot x_{n+1}$. Your question is: given a sine wave $x_n=\sin(\omega n)$, you assert that $f_n$ is also a sine wave and you want to know its amplitude in dependence on the frequency $\omega$. Answer : It's easier to calculate the frequency response with exponential functions instead of sine waves: $f_n = Ae^{i\omega n}$, $x_n = e^{i\omega n}$. Since $f_{n+1} = Ae^{i\omega (n+1)} = e^{i\omega} Ae^{i\omega n} = e^{i\omega} f_n$, the recurrence relation gives $e^{i\omega} A e^{i\omega n} = (1-c)Ae^{i\omega n} + c e^{i\omega} e^{i\omega n}$ which implies $A(\omega) = \frac{c}{1 - (1-c)e^{-i\omega}}$ To calculate the response for sine waves, you can represent the sine function as a linear combination of two exponential functions $x_n = \sin(\omega n) = \frac1{2i}(e^{i\omega n}-e^{-i\om
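A Python sketch (function names are mine) that evaluates $|A(\omega)|$ from the formula above and cross-checks it against a direct simulation of the recurrence on a sine input:

```python
import cmath
import math

def amplitude(c, omega):
    """|A(omega)| for the one-pole lowpass f[n] = (1-c)*f[n-1] + c*x[n]."""
    return abs(c / (1 - (1 - c) * cmath.exp(-1j * omega)))

def simulated_amplitude(c, omega, n_settle=2000, n_measure=2000):
    """Run the filter on sin(omega*n) and measure the steady-state peak."""
    f, peak = 0.0, 0.0
    for n in range(n_settle + n_measure):
        f = (1 - c) * f + c * math.sin(omega * n)
        if n >= n_settle:           # skip the initial transient
            peak = max(peak, abs(f))
    return peak

c, omega = 0.5, 0.3
assert abs(amplitude(c, omega) - simulated_amplitude(c, omega)) < 1e-2
```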
|
|applications|signal-processing|
| 0
|
Uniqueness of Characteristic Functions in Probability
|
According to Wikipedia, a characteristic function completely determines the properties of a probability distribution. This means it must be unique. However, the definition given is: $$\text{Char of } X(t)=E[e^{itX}]$$ Now $e^{iz}$ repeats for every $2 \pi$ increase in $z$. So how can it be unique?
|
It's just a Fourier transform: $E[e^{itX}]$ is an integral over the probability distribution. The characteristic function itself is unique; the value inside the expectation integral is periodic in its argument, but so what? The integral still has a single well-defined value for each $t$.
|
|probability|probability-theory|
| 0
|
Simple lowpass frequency response
|
Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple lowpass filter (something that smooths out a signal), and the filter object has a position variable and a cutoff variable (between 0 and 1). In every step, a value is put into the following bit of pseudocode as "input": position = position*(1-c)+input*c , or more mathematically, f(n) = f(n-1)*(1-c)+x[n]*c . The output is the value of "position." Basically, it moves a percentage of the distance between the current position and then input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time. Given the filter design, how do I construct a function that takes input frequency (where 1 means a sine wave with a wavelength of 2 samples, and .5 means a sine wave with wavelength 4 samples, and 0 is a flat line), and cutoff value (between 1 and 0, as shown ab
|
I don't have enough mojo to comment on Greg's answer. Greg made a silly calculational mistake: The transfer function $A(\omega)$ should be $c/(1-(1-c)e^{-i\omega})$. What you want is the modulus of $A(\omega)$. Note that $\sin \omega n$ is precisely the imaginary part of $e^{i\omega n}$. Because the relation between input and output is linear, the response to $\sin\omega n$ will be the imaginary part of $A(\omega)e^{i\omega n}$. That's going to be a sinusoid with some shifting and the amplitude $|A(\omega)|$. Here is a plot for $c=1/2$. To read more about this sort of things, google "IIR filter" or "infinite impulse response".
|
|applications|signal-processing|
| 0
|
Uniqueness of Characteristic Functions in Probability
|
According to Wikipedia, a characteristic function completely determines the properties of a probability distribution. This means it must be unique. However, the definition given is: $$\text{Char of } X(t)=E[e^{itX}]$$ Now $e^{iz}$ repeats for every $2 \pi$ increase in $z$. So how can it be unique?
|
I thought at first you were asking how the characteristic function can be unique. There's no issue here, because $e^{ix}$ is a well-defined function: for a given value of $x$, there is a unique value of $e^{ix}$. And so the expectation (an integral) has a unique value as well. On second thought, it appears you're asking how it's possible, considering that $e^{ix}$ is not injective (i.e., multiple $x$ can have the same value of $e^{ix}$), that the original distribution can be completely recovered from the characteristic function. The answer to that is that you're probably missing the "$t$" in the expression: the characteristic function is a function of $t$, and although for a given value of $t$ (e.g. $t=1$), the random variables $X$ and $X+2k\pi/t$ would have the same value of the characteristic function at that point, they would have different values at other $t$. So the characteristic functions are different, and yes, the distribution can be completely recovered from the characteristic function.
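This can be illustrated numerically for discrete random variables (a Python sketch; the function name is mine). Shifting $X$ by $2\pi$ leaves the characteristic function unchanged at $t=1$, but changes it at other $t$:

```python
import cmath

def char_fn(values, probs, t):
    """phi_X(t) = E[e^{itX}] for a discrete random variable."""
    return sum(p * cmath.exp(1j * t * x) for x, p in zip(values, probs))

two_pi = 2 * cmath.pi
# X is the constant 0; Y = X + 2*pi is the constant 2*pi.
# At t = 1 the two characteristic functions agree...
assert abs(char_fn([0], [1], 1.0) - char_fn([two_pi], [1], 1.0)) < 1e-9
# ...but at t = 0.5 they differ (1 versus e^{i*pi} = -1):
assert abs(char_fn([0], [1], 0.5) - char_fn([two_pi], [1], 0.5)) > 1.0
```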
|
|probability|probability-theory|
| 1
|
How do you prove that a group specified by a presentation is infinite?
|
The group: $$ G = \left\langle x, y \; \left| \; x^2 = y^3 = (xy)^7 = 1\right. \right\rangle $$ is infinite, or so I've been told. How would I go about proving this? (To prove finiteness of a finitely presented group, I could do a coset enumeration, but I don't see how this helps if I want to prove that it's infinite.)
|
Another approach, which can't work in general (see Noah's answer), but will surely work in this case, is to find a normal form for each element of the group, and then see whether there are finitely or infinitely many. In practice, that means imagining a word in x and y, and then applying the relations as much as possible to simplify it, and then trying to figure out (and prove!) what the possible different "irreducible words" (i.e. words that can no longer be simplified) are. In your case, the first thing one would note is that x can only appear to the 1st power (since any higher power can be simplified using x^2 = 1), while y can only appear to the powers +1 or -1 (for the same reason). Also, we can't have too many expressions of the form xy or yx in a row, because of the third relation. One can keep going like this. I didn't, but what I imagine is that one can have expressions of the form x y x y^{-1} x y x y^{-1} ... that are arbitrarily long, and inequivalent, explaining the infini
|
|group-theory|group-presentation|geometric-group-theory|infinite-groups|combinatorial-group-theory|
| 0
|
What is your favorite estimation exercise?
|
A fun question I ask students or interviewees (in engineering) is: This is not my question, this is an example: Using only what you know now, how many cans of soda would you estimate are produced per day (on average) in the United States? For this question, the result doesn't matter so much as the process you use. In this theme of estimation, what's your favorite question?
|
"How many estimation questions are asked in interviews across the world during a typical 24h period?"
|
|soft-question|
| 0
|
Solution to $1-f(x) = f(-x)$
|
Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$? I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
|
WolframAlpha provides a solution to this (and many other) functional equations: http://www.wolframalpha.com/input/?i=1-f(x)+%3D+f(-x)
|
|algebra-precalculus|functional-equations|
| 0
|
Which average to use? (RMS vs. AM vs. GM vs. HM)
|
The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know harmonic mean is useful when averaging speeds and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
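The definition translates directly into code, with the geometric mean handled as the $p=0$ limit (a Python sketch; the function name is mine):

```python
import math

def power_mean(xs, p):
    """Generalized mean: p=-1 harmonic, p=1 arithmetic, p=2 RMS,
    p=0 geometric (taken as the limiting case)."""
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x**p for x in xs) / len(xs)) ** (1 / p)

xs = [1.0, 4.0, 4.0]
assert abs(power_mean(xs, 1) - 3.0) < 1e-12        # arithmetic: (1+4+4)/3
assert abs(power_mean(xs, -1) - 2.0) < 1e-12       # harmonic: 3/(1 + 1/4 + 1/4)
assert abs(power_mean(xs, 0) - 16 ** (1/3)) < 1e-12  # geometric: (1*4*4)^(1/3)
```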
|
I admit I don't really know what type of answer you're looking for, so I'll say something that might very well be entirely irrelevant for your purposes but which I enjoy. At least, it'll provide some context for the power means you asked about. These generalized power means are basically the discrete (finitary) analogs of the $L^p$ norms . So, for instance, it's with these norms that you prove (using, say, elementary calculus) the finitary version of Hölder's inequality , which is really important in analysis, because it leads (via a limiting argument) to the more important fact that $L^p$ and $L^q$ spaces (which are continuous analogs of these finitary $l^p$ spaces) are dual for $p,q$ conjugate exponents. This duality is really important: one example is that if you are trying to prove something about the $L^p$ spaces that is preserved under duality, you just have to restrict yourself to the case $1 \leq p \leq 2$. The theory of singular integral operators provides examples of this: basic
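To make the question's definition concrete, here is a small sketch (function name hypothetical) computing the power mean for several exponents and checking the power-mean inequality $HM \le GM \le AM \le RMS$:

```python
import math

def power_mean(xs, p):
    """Generalized (power) mean with exponent p; p == 0 is the
    limiting case, the geometric mean."""
    n = len(xs)
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

data = [1.0, 4.0, 4.0]
hm, gm, am, rms = (power_mean(data, p) for p in (-1, 0, 1, 2))
# The power mean is nondecreasing in p: HM <= GM <= AM <= RMS
assert hm <= gm <= am <= rms
```

For `data = [1, 4, 4]` this gives $HM = 2$, $GM = 16^{1/3} \approx 2.52$, $AM = 3$, $RMS = \sqrt{11} \approx 3.32$, illustrating the monotonicity in $p$.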
|
|average|
| 0
|
What is your favorite estimation exercise?
|
A fun question I ask students or interviewees (in engineering) is one like this (not my actual question, just an example): Using only what you know now, how many cans of soda would you estimate are produced per day (on average) in the United States? For this question, the result doesn't matter so much as the process you use. In this theme of estimation, what's your favorite question?
|
I like "how many licks does it take to get to the center of a tootsie roll pop".
|
|soft-question|
| 0
|
Motivating Example for Algebraic Geometry/Scheme Theory
|
I am in the process of trying to learn algebraic geometry via schemes and am wondering if there are simple motivating examples of why you would want to consider these structures. I think my biggest issue is the following: I understand (and really like) the idea of passing from a space to functions on a space. In passing from $k^n$ to $R:=k[x_1,\ldots,x_n]$, we may recover the points by looking at the maximal ideals of $R$. But why consider $\operatorname{Spec} R$ instead of $\operatorname{MaxSpec} R$? Why is it helpful to have non-closed points that don't have an analog to points in $k^n$? A Wikipedia article mentioned that the Italian school used a (vague) notion of a generic point to prove things. Is there a (relatively) simple example where we can see the utility of non-closed points?
|
To start off, the discussion at Sbseminar has comments from lots of people who actually know algebraic geometry, and if anything I say contradicts something they say, please trust them and not me. One reason is that you lose the functoriality of $\operatorname{Spec}$ if you stick to $\operatorname{MaxSpec}$: the inverse image of a maximal ideal is not necessarily maximal. Nevertheless, if you stick to schemes of finite type over a field, this is true (it's basically a version of the Nullstellensatz). In particular, in Serre's FAC paper he defines a "variety" by gluing together regular affine algebraic sets in the sense of classical algebraic geometry. But this is less general. One natural example of a scheme which is not of finite type over a field is simply $\operatorname{Spec} \mathbb{Z}$. Then given a scheme $X$ over this (well, admittedly every scheme $X$ is a scheme over $\operatorname{Spec} \mathbb{Z}$ in a canonical way), the fiber at the non-closed point of $\operatorname{Spec} \mathbb{Z}$ is still interesting and basically amounts to studying polyno
|
|algebraic-geometry|
| 0
|
Motivating Example for Algebraic Geometry/Scheme Theory
|
I am in the process of trying to learn algebraic geometry via schemes and am wondering if there are simple motivating examples of why you would want to consider these structures. I think my biggest issue is the following: I understand (and really like) the idea of passing from a space to functions on a space. In passing from $k^n$ to $R:=k[x_1,\ldots,x_n]$, we may recover the points by looking at the maximal ideals of $R$. But why consider $\operatorname{Spec} R$ instead of $\operatorname{MaxSpec} R$? Why is it helpful to have non-closed points that don't have an analog to points in $k^n$? A Wikipedia article mentioned that the Italian school used a (vague) notion of a generic point to prove things. Is there a (relatively) simple example where we can see the utility of non-closed points?
|
See my answer here for a brief discussion of how points that are closed in one optic (rational solutions to a Diophantine equation, which are closed points on the variety over $\mathbb Q$ attached to the Diophantine equation) become non-closed in another optic (when we clear denominators and think of the Diophantine equation as defining a scheme over $\mathbb Z$). In terms of rings (and connecting to Qiaochu's answer), under the natural map $\mathbb Z[x_1,...,x_n] \to \mathbb Q[x_1,...,x_n]$, the preimages of maximal ideals are prime, but not maximal. These examples may give the impression that non-closed points are most important in arithmetic situations, but actually that is not the case. The ring $\mathbb C[t]$ behaves much like $\mathbb Z$, and so one can have the same discussion with $\mathbb Z$ and $\mathbb Q$ replaced by $\mathbb C[t]$ and $\mathbb C(t)$. Why would one do this? Well, suppose you have an equation (like $y^2 = x^3 + t$) which you want to study, where you think of $t$ a
|
|algebraic-geometry|
| 0
|
What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)?
|
In mathematics, there seem to be a lot of different types of numbers. What exactly are: Real numbers Integers Rational numbers Decimals Complex numbers Natural numbers Cardinals Ordinals And as workmad3 points out, some more advanced types of numbers (I'd never heard of) Hyper-reals Quaternions Imaginary numbers Are there any other types of classifications of a number I missed?
|
There are many types of numbers, though the natural numbers, the integers, the rational, the decimal, the real, and the complex form a nice self-complete expository whole. Hopefully, the following block(s) of text aren't too poorly formatted. 1) The set of natural numbers consists of numbers with which we count, $\{0,1,2,3,\ldots\}$. As noted in some of the other answers, some people think that 0 is not a natural number (see one of my desktop backgrounds ). Whether or not it is is a matter of taste. What is not a matter of taste are the defining properties of the natural numbers. In particular, there is a (binary) operation called $+$, which takes two numbers $a$ and $b$ and spits out a third number $a+b$, which is the sum of $a$ and $b$. It satisfies the usual properties that you would expect from counting: it is commutative (the order of the summands doesn't matter, i.e. $a+b=b+a$) and associative (the order in which you add summands to each other doesn't matter, i.e. $(a+b)+c=a+(b+c)$, so you can always
|
|terminology|definition|number-systems|
| 0
|
Is there a known mathematical equation to find the nth prime?
|
I've solved for it by writing a computer program, but I was wondering whether there is a mathematical equation that you could use to solve for the nth prime?
|
No such formula is known, but there are a few that give impressive results. A famous one is Euler's: $$P(n) = n^2 - n + 41,$$ which yields a prime for every natural number $n$ below $41$, though not necessarily the $n$th prime. See more here .
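A quick sketch (helper names hypothetical) confirming the range over which Euler's polynomial produces primes, and where it first fails:

```python
def is_prime(n):
    """Trial division; fine for the small values P(n) takes here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

P = lambda n: n * n - n + 41

# Prime for n = 0..40, but P(41) = 41*40 + 41 = 41^2 is composite.
assert all(is_prime(P(n)) for n in range(41))
assert not is_prime(P(41))
```

The failure at $n = 41$ is forced: $P(41) = 41 \cdot 40 + 41 = 41^2$, so no polynomial of this kind can produce primes forever.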
|
|number-theory|prime-numbers|
| 0
|
Is there a known mathematical equation to find the nth prime?
|
I've solved for it by writing a computer program, but I was wondering whether there is a mathematical equation that you could use to solve for the nth prime?
|
No, there is no known formula that gives the $n$th prime, except artificial ones you can write that are basically equivalent to "the $n$th prime". But if you only want an approximation, the $n$th prime is roughly around $n \ln n$ (or more precisely, near the number $m$ such that $m/\ln m = n$) by the prime number theorem . In fact, we have the following asymptotic bounds on the $n$th prime $p_n$: $$n \ln n + n(\ln\ln n - 1) < p_n < n \ln n + n \ln\ln n \quad \text{for } n \ge 6.$$ You can sieve within this range if you want the $n$th prime. [Edit: Using more accurate estimates you'll have a much smaller range to sieve; see the answer by Charles.] Entirely unrelated: if you want to see formulae that generate a lot of primes (not the $n$th prime) up to some extent, like the famous $f(n)=n^2-n+41$ , look at the Wikipedia article formula for primes , or Mathworld for Prime Formulas .
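The sieve-within-the-bound approach can be sketched directly (function name hypothetical): sieve up to $n(\ln n + \ln\ln n)$, which is a valid upper bound for $p_n$ when $n \ge 6$, then count primes.

```python
import math

def nth_prime(n):
    """Locate the nth prime (1-indexed) by sieving up to the upper
    bound n(ln n + ln ln n), valid for n >= 6; small n hardcoded."""
    if n < 6:
        return [2, 3, 5, 7, 11][n - 1]
    hi = int(n * (math.log(n) + math.log(math.log(n)))) + 1
    sieve = bytearray([1]) * (hi + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(hi ** 0.5) + 1):
        if sieve[p]:
            # Knock out multiples of p starting at p*p
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    count = 0
    for p in range(2, hi + 1):
        if sieve[p]:
            count += 1
            if count == n:
                return p

assert nth_prime(6) == 13
assert nth_prime(1000) == 7919
```

This runs in roughly $O(p_n \log\log p_n)$ time, which is fine for moderate $n$; the tighter estimates mentioned in the edit shrink the sieving window but not the asymptotics.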
|
|number-theory|prime-numbers|
| 1
|
Proving the Shoelace Method at the Precalculus Level
|
Using only precalculus mathematics (including that the area of the triangle with vertices at the origin, $(x_1,y_1)$, and $(x_2,y_2)$ is half of the absolute value of the determinant of the $2\times 2$ matrix of the vertices $(x_1,y_1)$ and $(x_2,y_2)$, $\frac{1}{2}\cdot\left|x_1\cdot y_2 - x_2\cdot y_1\right|$) how can one prove that the shoelace method works for all non-self-intersecting polygons?
|
One way is to note that $x_1y_2 - x_2y_1$ is a signed area, i.e. it may positive or negative. Adding up all the signed areas of the triangles formed by the points $O$ , $P_k$ and $P_{k+1}$ will cancel all the superfluous parts, as can be seen from the sketch: The same argument also works for trapezoids with the $x$ -axis instead of triangles with the origin. (I think this is the most illuminating argument, because the key trick is to give the area a sign depending on orientation.) One could argue that this is not very rigorous, however. A more rigorous proof is to divide the polygon into two smaller polygons (it's not trivial to show that this is possible) and argue that adding the shoelace sums of the two parts gives the shoelace sum of the whole. That's because the two additional terms for the extra side cancel each other. (This cancellation is well-known from line integrals, we are in essence calculating $\frac12 \oint_{polygon} x\,dy - y\,dx$ here.) By induction, you then only have
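For reference, the signed-area sum described above can be sketched as follows (function name hypothetical); the sign cancellation is exactly what makes it work for non-convex polygons too:

```python
def shoelace_area(pts):
    """Signed shoelace area: half the sum of cross products of
    consecutive vertex pairs; positive for counterclockwise order."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

# Unit square, counterclockwise: area 1
assert shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1.0
# Non-convex L-shaped polygon (2x2 square minus a 1x1 corner): area 3
assert shoelace_area([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]) == 3.0
```

Traversing the same polygon clockwise flips the sign, which is the orientation dependence discussed above.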
|
|geometry|algebra-precalculus|contest-math|analytic-geometry|
| 1
|
Non-integer powers of negative numbers
|
Roots behave strangely over complex numbers . Given this, how do non-integer powers behave over negative numbers? More specifically: Can we define fractional powers such as $(-2)^{-1.5}$? Can we define irrational powers $(-2)^\pi$?
|
Below is my geometric understanding of why irrational powers of negatives are difficult to define. As such it is probably not rigorous and may be wrong. The way irrational powers of the real numbers are usually defined is by limits of fractional powers. For complex numbers, the same is true, except the limiting process is more complicated. We can of course coast as usual on our real-numbers result and see that we only need to define irrational powers on the unit circle, since every other point is some positive real multiple of a point on the unit circle. Now in general $z \mapsto z^n$ wraps the circle around itself $n$ times. What does $z \mapsto z^{1/n}$ do? Well, it's not clear, since each point has $n$ possible points it could have come from; in particular, if you partition the circle into $n$ arcs of length $2\pi/n$, each of those gets mapped to the full circle. Once you choose a starting arc, though, $z \mapsto z^m$ maps the starting arc to other arcs in the following way. Partition your starting arc of length $2\pi$
|
|complex-numbers|exponentiation|
| 0
|
Is there a known mathematical equation to find the nth prime?
|
I've solved for it by writing a computer program, but I was wondering whether there is a mathematical equation that you could use to solve for the nth prime?
|
There are formulas on Wikipedia, though they are messy. No polynomial $p(x)$ can output the $n$th prime for all $n$, as is explained in the first section of the article. There is, however, a polynomial in 26 variables whose positive values are precisely the primes. (This is fairly useless as far as computation is concerned.) This comes from the fact that the property of being a prime is decidable, and the theorem of Matiyasevich .
|
|number-theory|prime-numbers|
| 0
|
Twenty questions against a liar
|
Here's one that popped into my mind when I was thinking about binary search. I'm thinking of an integer between $1$ and $n$. You have to guess my number. You win as soon as you guess the correct number. If your guess is not correct, I'll give you a hint by saying "too high" or "too low". What's your best strategy? This is an easy problem if I always tell the truth: by guessing the (rounded) mean of the lower bound and the upper bound, you can find my number in roughly $\log_2 n$ guesses at most. But what if I'm allowed to cheat once? What is your best strategy then? To clarify: if you guess my number, you always win instantly. But I'm allowed, at most once, to tell you "too high" when your guess is actually too low, or the opposite. I can also decide not to lie. Here's a rough upper bound: you can ask each number twice to make sure I'm not cheating, and if I ever give two different answers, just ask a third time and from then on, play the regular game. In this way, you can win with about 2 l
|
There is a fairly simple strategy which requires (1 + ε)log(n) + 1/ε + O(1) queries, for any constant ε > 0. I illustrate this strategy below. First, I ask you whether or not X, the secret number, is less than n/2. Without loss of generality, suppose you answer "less". I learn nothing at first, because you may be lying; but for the moment I give you the benefit of the doubt. I next ask you whether X is less than n/4. If you say "less", I don't know whether or not 0 < X < n/4, but I do know that 0 < X < n/2, because you can't lie twice; so I do know that your answer to the preceding query was honest. So we may reduce to the case where you say "more" to my query of whether X is less than n/4. If I continue to take you at your word, pursue a binary search, and enquire about whether X is less than 3n/8, you may say either "more" or "less". If you say "more", then I don't know whether X > n/2, but I do know that X > n/4, again because you can't lie twice. So again so long as you continue to answer "more" in my normal binary search, I know that your answer to the preceding query was honest. More generally: if I consistentl
|
|algorithms|computer-science|discrete-mathematics|searching|
| 1
|
Non-integer powers of negative numbers
|
Roots behave strangely over complex numbers . Given this, how do non-integer powers behave over negative numbers? More specifically: Can we define fractional powers such as $(-2)^{-1.5}$? Can we define irrational powers $(-2)^\pi$?
|
As other posters have indicated, the problem is that the complex logarithm isn't well-defined on $\mathbb{C}$. This is related to my comments in a recent question about the square root not being well-defined (since of course $\sqrt{z} = e^{ \frac{\log z}{2} }$). One point of view is that the complex exponential $e^z : \mathbb{C} \to \mathbb{C}$ does not really have domain $\mathbb{C}$. Due to periodicity it really has domain $\mathbb{C}/2\pi i \mathbb{Z}$. So one way to define the complex logarithm is not as a function with range $\mathbb{C}$, but as a function with range $\mathbb{C}/2\pi i \mathbb{Z}$. Thus for example $\log 1 = 0, 2 \pi i, - 2 \pi i, ...$ and so forth. So what are we doing when we don't do this? Well, let us suppose that for the time being we have decided that $\log 1 = 0$. This is how we get other values of the logarithm: using power series, we can define $\log (1 + z)$ for any $z$ with $|z| < 1$; then we can pick a number inside this disc and expand around that number to get a different power series whose circle of convergence is
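In practice, once a branch is fixed, expressions like $(-2)^{-1.5}$ and $(-2)^\pi$ become computable. A sketch using Python's `cmath`, whose `log` uses the principal branch with $\operatorname{Arg} z \in (-\pi, \pi]$:

```python
import cmath
import math

def principal_power(z, w):
    """z**w via the principal branch of the logarithm,
    Log z = ln|z| + i*Arg(z) with Arg in (-pi, pi]."""
    return cmath.exp(w * cmath.log(z))

# Both values from the question are well-defined once the branch is fixed
a = principal_power(-2, -1.5)
b = principal_power(-2, math.pi)

# Python's built-in complex power uses the same principal branch
assert abs(a - (-2) ** complex(-1.5)) < 1e-9
assert abs(b - (-2) ** complex(math.pi)) < 1e-9
```

Choosing a different branch of the logarithm (adding $2\pi i k$ to $\operatorname{Log}(-2)$) multiplies the result by $e^{2\pi i k w}$, which for irrational $w$ like $\pi$ gives infinitely many distinct values, matching the discussion above.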
|
|complex-numbers|exponentiation|
| 1
|
Solving systems of equations in roots of unity
|
In part of my research, the following problem has come up. Consider the system of equations (in complex numbers) $$z^b w^c = 1,\quad z^d w^e = 1.$$ I am interested in the solution set when we restrict both $z$ and $w$ to be $a^{\textrm{th}}$ roots of unity, for some positive integer $a$. Of course, one immediately sees that $(z, w) = (1, 1)$ is a solution. What are some nice necessary and sufficient conditions on $a, b, c, d,$ and $e$ which guarantee that $(z, w) = (1, 1)$ is the ONLY solution? To give an idea of the flavor of answer I'd be most happy with, one must have $\textrm{gcd}(a, b, d) = \textrm{gcd}(c, e) = 1$, because if $z$ is any $\textrm{gcd}(a, b, d)^{\textrm{th}}$ root of $1$ (which is necessarily an $a^{\textrm{th}}$ root of $1$), then $(z, 1)$ is a solution to both equations. It also turns out that $z$ and $w$ must both be $\textrm{gcd}(a, be - cd)^{\textrm{th}}$ roots of $1$. I'd love to have an answer like "$\textrm{gcd}(a, be - cd) = \textrm{gcd}(a, b, d) = \textrm{
|
You are computing the nullspace of a $2 \times 2$ matrix over $\mathbb{Z}/a\mathbb{Z}$, so a necessary and sufficient condition is that the matrix in question is invertible, hence $\gcd(a, be-dc) = 1$ should be necessary and sufficient. In the second case you should compute the Smith normal form of your matrix over $\mathbb{Z}$. I believe a necessary and sufficient condition is that the Smith normal form is $\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$.
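The determinant criterion $\gcd(a, be - cd) = 1$ can be sanity-checked by brute force over exponents: writing $z = e^{2\pi i\,i_0/a}$ and $w = e^{2\pi i\,j_0/a}$ turns the two equations into congruences mod $a$. A sketch (function name hypothetical):

```python
from math import gcd

def only_trivial_solution(a, b, c, d, e):
    """Brute force over a-th roots of unity via exponents:
    z^b w^c = 1 and z^d w^e = 1 become
    b*i + c*j = 0 (mod a) and d*i + e*j = 0 (mod a)."""
    sols = [(i, j) for i in range(a) for j in range(a)
            if (b * i + c * j) % a == 0 and (d * i + e * j) % a == 0]
    return sols == [(0, 0)]

# Compare against the determinant criterion for a few small cases
for a in range(2, 8):
    for (b, c, d, e) in [(1, 2, 3, 4), (2, 2, 2, 2), (1, 0, 0, 1), (0, 1, 1, 0)]:
        assert only_trivial_solution(a, b, c, d, e) == (gcd(a, b * e - c * d) == 1)
```

This is $O(a^2)$ per instance, so it is only a check, not a method; the point is that the invertibility of the matrix $\begin{pmatrix} b & c \\ d & e \end{pmatrix}$ over $\mathbb{Z}/a\mathbb{Z}$ is exactly what the brute force detects.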
|
|linear-algebra|number-theory|
| 1
|
Sum of Gaussian Variables
|
Let's say I know $X$ is a Gaussian variable. Moreover, I know $Y$ is a Gaussian variable and $Y=X+Z$ . Let $X$ and $Z$ be independent. How can I prove that $Y$ is a Gaussian random variable if and only if $Z$ is a Gaussian R.V.? It's easy to show the other way around ( $X$ and $Z$ are orthogonal and normal, hence create a Gaussian vector hence any linear combination of the two is a Gaussian variable). Thanks
|
Your question: given that X and Z are independent, X is Gaussian (I'll use "normal"), and Y = X+Z, prove that Y is normal iff Z is normal. Right? As you observed, one direction is easy: if Z is normal, then so is Y=X+Z. So for the other direction, assume that Y is normal. We need to prove that Z is normal too. Perhaps there's an even easier way, but it's straightforward to use characteristic functions , which completely characterise distributions. Because X and Z are independent, $ \varphi_Y(t) = E[e^{itY}] = E[e^{it(X+Z)}] = E[e^{itX}]E[e^{itZ}]$, and so, $ \varphi_Z(t) = E[e^{itZ}] = E[e^{itY}]/E[e^{itX}] $ This means that Z has exactly the right characteristic function for a normal variable, and hence it's normal. More interestingly and much more generally, there is a theorem of Cramér (e.g. see here ) which says that if X and Z are independent and X+Z is normally distributed, then both X and Z are!
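The division step can be checked in a toy case, using the known characteristic function $\varphi(t) = e^{i\mu t - \sigma^2 t^2/2}$ of $N(\mu, \sigma^2)$ (the specific parameters below are arbitrary illustrations, not from the question):

```python
import cmath

def cf_normal(t, mu, var):
    """Characteristic function of N(mu, var): exp(i*mu*t - var*t^2/2)."""
    return cmath.exp(1j * mu * t - var * t * t / 2)

# If X ~ N(1, 2), Y = X + Z ~ N(3, 5), with X and Z independent,
# then phi_Z(t) = phi_Y(t) / phi_X(t) matches N(2, 3) exactly:
# means subtract (3 - 1 = 2) and variances subtract (5 - 2 = 3).
for t in (-1.5, -0.3, 0.0, 0.7, 2.0):
    ratio = cf_normal(t, 3, 5) / cf_normal(t, 1, 2)
    assert abs(ratio - cf_normal(t, 2, 3)) < 1e-12
```

The algebra behind the assertion is just $e^{3it - 5t^2/2} / e^{it - t^2} = e^{2it - 3t^2/2}$, i.e. the quotient of normal characteristic functions with subtractable parameters is again a normal characteristic function.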
|
|probability-theory|
| 1
|
Motivating Example for Algebraic Geometry/Scheme Theory
|
I am in the process of trying to learn algebraic geometry via schemes and am wondering if there are simple motivating examples of why you would want to consider these structures. I think my biggest issue is the following: I understand (and really like) the idea of passing from a space to functions on a space. In passing from $k^n$ to $R:=k[x_1,\ldots,x_n]$, we may recover the points by looking at the maximal ideas of $R$. But why consider $\operatorname{Spec} R$ instead of $\operatorname{MaxSpec} R$? Why is it helpful to have non-closed points that don't have an analog to points in $k^n$? On a wikipedia article, it mentioned that the Italian school used a (vague) notion of a generic point to prove things. Is there a (relatively) simple example where we can see the utility of non-closed points?
|
Here's a simple intersection theoretic example. Take the intersection of the line $y=0$ and the parabola $y=x^2$. Classically, the intersection is a point. But note that there is more to the intersection than just the point; there is the fact that the two curves are tangent at that point. Scheme-theoretically, the intersection is $\operatorname{Spec} k[x,y]/(y,y-x^2) \cong \operatorname{Spec} k[x]/(x^2)$. This reflects the tangency. If the intersection were transverse, then the scheme-theoretic intersection would have been just $\operatorname{Spec} k$. Higher order tangencies can be seen in the scheme-theoretic intersection as well; for example, repeat this exercise with $y=x^3$ in place of $y=x^2$.
|
|algebraic-geometry|
| 0
|
Which average to use? (RMS vs. AM vs. GM vs. HM)
|
The generalized mean (power mean) with exponent $p$ of $n$ numbers $x_1, x_2, \ldots, x_n$ is defined as $$ \bar x = \left(\frac{1}{n} \sum x_i^p\right)^{1/p}. $$ This is equivalent to the harmonic mean, arithmetic mean, and root mean square for $p = -1$, $p = 1$, and $p = 2$, respectively. Also its limit at $p = 0$ is equal to the geometric mean. When should the different means be used? I know harmonic mean is useful when averaging speeds and the plain arithmetic mean is certainly used most often, but I've never seen any uses explained for the geometric mean or root mean square. (Although standard deviation is the root mean square of the deviations from the arithmetic mean for a list of numbers.)
|
An important special case of the AM-GM inequality is that the product of two (positive) numbers with a constant sum is at a maximum when they are equal. This comes up a lot.
|
|average|
| 0
|
genericness and the Zariski topology
|
What does it mean (in a mathematically rigorous way) to claim something is "generic?" How does this coincide with the Zariski topology?
|
In general*, if something is "generic", it means it happens or is true "almost all of the time" or "almost everywhere". In measure theory, for example, when you say "$P(x)$ is true for almost all $x$", this has the precise meaning that the set of $x$'s for which $P(x)$ does not hold has measure zero. One can relate this to the Zariski topology via the fact that Zariski closed subsets of $\mathbb{C}^n$ have Lebesgue measure zero: https://mathoverflow.net/questions/25513/zariski-closed-sets-in-cn-are-of-measure-0 See also these MO posts: https://mathoverflow.net/questions/19688/what-does-generic-mean-in-algebraic-geometry https://mathoverflow.net/questions/2162/what-are-the-most-important-instances-of-the-yoga-of-generic-points *or perhaps I should say... generically ;-)
|
|general-topology|algebraic-geometry|intuition|
| 0
|
What is the name for a shape that is like a capsule, but with two different radii?
|
I'm looking for the name of a shape that is like a capsule, but where each circle can have different radii. The shape could be described using two circles (two centers and two radii). Something like this (this is a 2D shape, not a lame attempt at 3D):
|
You can just say it is the convex hull of two circles.
|
|geometry|terminology|circles|
| 1
|
What is a Markov Chain?
|
What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example.
|
I had a programming project in college where we generated large amounts of pseudo-English text using Markov chains. The assignment is here , although I don't know if that link will be good forever. From that page: For example, suppose that [our Markov chains are of length] 2 and the sample file contains I like the big blue dog better than the big elephant with the big blue hat on his tusk. Here is how the first three words might be chosen: A two-word sequence is chosen at random to become the initial prefix. Let's suppose that "the big" is chosen. The first word must be chosen based on the probability that it follows the prefix (currently "the big") in the source. The source contains three occurrences of "the big". Two times it is followed by "blue", and once it is followed by "elephant". Thus, the next word must be chosen so that there is a 2/3 chance that "blue" will be chosen, and a 1/3 chance that "elephant" will be chosen. Let's suppose that we choose "blue" this time. The next wo
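The prefix-to-followers table from the example can be sketched in a few lines (function names hypothetical); the assertion reproduces the 2/3 vs 1/3 count for the prefix "the big":

```python
import random

def build_chain(text, k=2):
    """Map each k-word prefix to the list of words that follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - k):
        prefix = tuple(words[i:i + k])
        chain.setdefault(prefix, []).append(words[i + k])
    return chain

def generate(chain, length, seed=None):
    """Random walk on the chain: repeatedly sample a follower of the
    current prefix, then slide the prefix window forward by one word."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    while len(out) < length and prefix in chain:
        nxt = rng.choice(chain[prefix])
        out.append(nxt)
        prefix = prefix[1:] + (nxt,)
    return " ".join(out)

sample = ("I like the big blue dog better than the big elephant "
          "with the big blue hat on his tusk.")
chain = build_chain(sample)
# "the big" is followed by "blue" twice and "elephant" once, as in the answer
assert sorted(chain[("the", "big")]) == ["blue", "blue", "elephant"]
```

Because followers are stored with multiplicity, `random.choice` automatically samples with the right probabilities (2/3 for "blue", 1/3 for "elephant") without any explicit probability bookkeeping.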
|
|probability-theory|stochastic-processes|terminology|markov-chains|intuition|
| 0
|
Area of a triangle from some of its parts
|
I found this question a while ago on a SAT practice exam or something, can't quite remember. So given an acute triangle $ABC$ with $P$ a point inside it and $AP$, $BP$, and $CP$ meeting the opposite sides at $D$, $E$, and $F$ respectively: How can you find the area of triangle $ABC$ given the areas of triangles $x$, $y$, and $z$?
|
This was not likely to have been an SAT practice problem, though it is a typical contest problem. $AE:EC = x:y$ (since those two triangles have the same altitude to $AC$, the ratio of their areas is the ratio of their bases with respect to that altitude) and $AP:PD = (x+y):z$ (same idea as $AE:EC$). Knowing these two ratios, apply the technique of mass points , putting masses $zy$ at $A$, $zx$ at $C$ (gives the ratio $x:y$ for $AE:EC$), and $y(x+y)$ at $D$ (gives $(x+y):z$ for $AP:PD$). This results in a mass of $y(x+y)-zx$ at $B$, so the ratio $BD:DC = zx:(y(x+y)-zx).$ This must also be the ratio of the areas of $\triangle ABD$ to $\triangle ADC$ (common altitude again), so (area of $\triangle ABD$)$:(x+y+z) = zx:(y(x+y)-zx)$. Solving from there is a matter of bashing out the algebra.
|
|geometry|contest-math|
| 1
|
How can I understand and prove the "sum and difference formulas" in trigonometry?
|
The " sum and difference " formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, How can I prove that these formulas are correct? More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula , although such answers are still encouraged, for completeness.
|
The key fact here is that rotation is a linear transformation, e.g. the rotation of $u + v$ is the rotation of $u$ plus the rotation of $v$. You should draw a diagram that shows this carefully if you don't believe it. That means a rotation is determined by what it does to $(1, 0)$ and to $(0, 1)$. But $(1, 0)$ rotated by $\theta$ counterclockwise is just $(\cos \theta, \sin \theta)$, whereas $(0, 1)$ rotated by $\theta$ counterclockwise is just $(-\sin \theta, \cos \theta)$. (Again, draw a diagram.) That means a rotation by $\theta$ is given by a $2 \times 2$ matrix with those entries. So take a rotation by $\theta$ and another one by $\theta'$, and multiply the corresponding matrices. What you get is the sine and cosine angle addition formulas. (The connection to complex numbers is that one can represent complex numbers as $2 \times 2$ real matrices.) Also, if you believe that $a \cdot b = |a| |b| \cos \theta$, this implies the cosine an
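A numerical sketch of this argument (helper names hypothetical): build the $2 \times 2$ rotation matrices from the columns described above, multiply them, and compare the entries against $\sin(\alpha+\beta)$ and $\cos(\alpha+\beta)$:

```python
import math

def rot(theta):
    """2x2 rotation matrix as nested tuples: the columns are the
    images of (1,0) and (0,1), i.e. (cos t, sin t) and (-sin t, cos t)."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

a, b = 0.7, 1.1
prod = matmul(rot(a), rot(b))
# Entry (1,0) of the product is the sine addition formula,
# entry (0,0) the cosine addition formula.
assert abs(prod[1][0] - math.sin(a + b)) < 1e-12
assert abs(prod[0][0] - math.cos(a + b)) < 1e-12
```

Reading off the matrix product entry by entry gives exactly $\sin a \cos b + \cos a \sin b$ and $\cos a \cos b - \sin a \sin b$, which is the point of the argument.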
|
|intuition|trigonometry|
| 1
|
How can I understand and prove the "sum and difference formulas" in trigonometry?
|
The " sum and difference " formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, How can I prove that these formulas are correct? More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula , although such answers are still encouraged, for completeness.
|
There are several typical derivations used in high school texts. Here's one: diagram http://www.imgftw.net/img/400545892.png Take two points on the unit circle, one a rotation of (1,0) by α, the other a rotation of (1,0) by β. Their coordinates are as shown in the diagram. Let c be the length of the segment joining those two points. By the Law of Cosines (on the blue triangle), $c^2=1^2+1^2-2\cdot1\cdot1\cdot\cos(\alpha-\beta)$ . Using the distance formula, $c=\sqrt{(\cos\alpha-\cos\beta)^2+(\sin\alpha-\sin\beta)^2}$ . Squaring the latter and setting the two equal, $1^2+1^2-2\cdot1\cdot1\cdot\cos(\alpha-\beta)=(\cos\alpha-\cos\beta)^2+(\sin\alpha-\sin\beta)^2$ . Simplifying both sides, $2-2\cos(\alpha-\beta)=\cos^2\alpha-2\cos\alpha\cos\beta+\cos^2\beta+\sin^2\alpha-2\sin\alpha\sin\beta+\sin^2\beta$ $=2-2\cos\alpha\cos\beta-2\sin\alpha\sin\beta$ (using the Pythagorean identity). Solving for $\cos(\alpha-\beta)$ , $\cos(\alpha-\beta)=\cos\alpha\cos\beta+\sin\alpha\sin\beta$ . From this
|
|intuition|trigonometry|
| 0
|
How can I understand and prove the "sum and difference formulas" in trigonometry?
|
The " sum and difference " formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, How can I prove that these formulas are correct? More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula , although such answers are still encouraged, for completeness.
|
Though the standard high-school derivations are not the most useful way to remember it in the long run, here's another one which I like because you can "see" it directly without much algebra. Let P be the point on the unit circle got by rotating (1,0) by angle α+β. Drop a perpendicular N to the α-rotated line, and R to the x-axis. So from the right triangle ONP, you see ON = cos β . You can see that the angle RPN is α too: it's the complement of ∠PNQ, and so is ∠QNO = α. Now, $\sin(\alpha + \beta) = \mbox{PR} = \mbox{PQ} + \mbox{QR} = \sin(\beta)\cos(\alpha) + \cos(\beta)\sin(\alpha)$, and $\cos(\alpha + \beta) = \mbox{OR} = \mbox{OM} - \mbox{RM} = \cos(\beta)\cos(\alpha) - \sin(\beta)\sin(\alpha)$.
|
|intuition|trigonometry|
| 0
|
How can I understand and prove the "sum and difference formulas" in trigonometry?
|
The " sum and difference " formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, How can I prove that these formulas are correct? More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula , although such answers are still encouraged, for completeness.
|
I remember that $e^{i\alpha}=\cos\alpha+i\sin\alpha$ and that $i^2=-1$. Both these relations are useful in many other situations and pretty fundamental to understanding complex numbers. Then your equalities are the real and, respectively, the imaginary part of $e^{i(\alpha+\beta)}=e^{i\alpha}e^{i\beta}$. This is not very different from the other answers, but I actually prefer the algebra perspective. The only place where I think geometrically is in interpreting $e^{i\alpha}=\cos\alpha+i\sin\alpha$ by thinking of the unit circle in the complex plane.
|
|intuition|trigonometry|
| 0
|
Is this version of the Hanoi towers problem NP-complete?
|
This was really inspired by Solitaire, but a few people reacted with ``oh, it's like the towers of Hanoi, isn't it?'' so I'll try to pose the problem in terms of discs here. Let's start. There are n disks on the real line, one of size 1 at position $x_1$, one of size 2 at position $x_2$, ..., and one of size n at position $x_n$. Your goal is to make a tower with all n discs, consuming as little energy as possible in the process. You are allowed to move a tower whose base is a disk of size k only on top of the disk with size k+1 (which may be the top of another mini-tower). The energy you consume to perform such a move is the distance traveled by the moved mini-tower. For example, the energy consumed by the first move is $|x_k-x_{k+1}|$. Now, you'd like to write a program that tells you whether the energy you have is enough to perform the task. It just needs to say Y or N. (If the answer is Y, then clearly the list of moves is proof enough that the answer is correct, so the problem is N
|
Here's a polynomial time solution given to me by Javi Gomez. Let $(i,j)$ with $i\le j$ represent the situation where disks $i, i+1, \ldots, j$ are on top of each other in position $x_j$ , and let $E(i,j)$ represent the energy needed to obtain that configuration. Clearly $E(i,i)$ is zero for all $i$. Also, $E(i,j)=\min_{i\le k<j}\left[E(i,k)+E(k+1,j)+|x_k-x_j|\right]$ . What we want is $E(1,n)$. (I made this a community wiki since the answer wasn't really found by me. Feel free to edit.)
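The recurrence translates directly into a memoized $O(n^3)$ implementation (names hypothetical; positions are 0-indexed here, so the answer's $E(1,n)$ becomes $E(0, n-1)$):

```python
from functools import lru_cache

def min_energy(xs):
    """Minimum energy to stack all disks, where xs[i] is the position
    of disk i+1: E(i, j) is the cost of getting disks i..j stacked at
    xs[j], built by merging a tower (i..k) at xs[k], whose move onto
    the tower (k+1..j) at xs[j] costs |xs[k] - xs[j]|."""
    n = len(xs)

    @lru_cache(maxsize=None)
    def E(i, j):
        if i == j:
            return 0.0
        return min(E(i, k) + E(k + 1, j) + abs(xs[k] - xs[j])
                   for k in range(i, j))

    return E(0, n - 1)

assert min_energy([5.0, 5.0, 5.0]) == 0.0   # already stacked in place
assert min_energy([0.0, 10.0]) == 10.0      # one move of length 10
assert min_energy([0.0, 1.0, 10.0]) == 10.0 # merge 1 onto 2 first, then move both
```

There are $O(n^2)$ states and each takes $O(n)$ to evaluate, hence cubic time, which answers the decision question ("is my energy budget enough?") in polynomial time.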
|
|algorithms|recreational-mathematics|np-complete|
| 0
|
How to test if a point is inside the convex hull of two circles?
|
Following my previous question , I'm wondering how I can determine whether a point is within the convex hull of two circles (a boolean value). There's no problem testing if the point is in either of the two circles, but it can also be "between" them and I don't know how to test for that. Judging from Wolfram MathWorld's article on Circle-Circle Tangents , it seems that an inequality testing whether the point is on the internal side of the two external tangent lines would do the trick, but I'm afraid my equation-solving skills fall too far short to turn the tangent equations into a fitting inequality. I'm defining the convex hull of two circles by both centers and radii.
|
The solution to this problem is indeed to check whether the point is in one of the circles or in the isosceles trapezoid determined by the points where the circles touch the tangent lines. However, without some care the equations get messy. Let's start with a circle at (x1', y1') of radius r1', a circle at (x2', y2') of radius r2', and a point (x', y') to test. We may assume that r1' is at least r2'. By a shift followed by a rotation followed by a scaling, this can be transformed into a simpler instance: a circle at (0,0) of radius r1, a circle at (1,0) of radius r2, and a point (x,y) to test. We have: r1=r1'/D, r2=r2'/D, $x=((y'-y'_1)\sin\alpha+(x'-x'_1)\cos\alpha)/D$ $y=((y'-y'_1)\cos\alpha-(x'-x'_1)\sin\alpha)/D$ where $D^2=(x'_1-x'_2)^2+(y'_1-y'_2)^2$ and $\alpha$ is the angle of the vector (x2'-x1', y2'-y1'). (In C, there's a function atan2 that takes the two coordinates of a point and gives this angle; the mathematical atan doesn't distinguish between points symmetric about (0,0).) If the point
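As a numerical alternative to the normalization above, here is a sketch (function name and approach are my own, not from the answer) using the standard convexity fact that the hull of two discs is the union of the interpolated discs $D(c_1 + t(c_2-c_1),\, r_1 + t(r_2-r_1))$ for $t\in[0,1]$:

```python
import math

def in_hull_of_two_circles(p, c1, r1, c2, r2, eps=1e-9):
    """Test whether point p lies in the convex hull of the discs
    (c1, r1) and (c2, r2) by minimizing the convex signed distance
    f(t) = |p - c(t)| - r(t) over the interpolation parameter t."""
    px, py = p
    vx, vy = c2[0] - c1[0], c2[1] - c1[1]

    def f(t):
        cx, cy = c1[0] + t * vx, c1[1] + t * vy
        return math.hypot(px - cx, py - cy) - (r1 + t * (r2 - r1))

    lo, hi = 0.0, 1.0
    for _ in range(100):            # ternary search on a convex function
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f((lo + hi) / 2) <= eps

# Unit circles at (0,0) and (4,0): the point (2, 0.5) lies "between"
# them, inside the hull but outside both circles.
print(in_hull_of_two_circles((2, 0.5), (0, 0), 1.0, (4, 0), 1.0))  # True
print(in_hull_of_two_circles((2, 1.5), (0, 0), 1.0, (4, 0), 1.0))  # False
```

This avoids working out the tangent lines explicitly, at the cost of a small iterative search.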
|
|geometry|
| 0
|
Concrete examples of valuation rings of rank two.
|
Let $A$ be a valuation ring of rank two. Then $A$ gives an example of a commutative ring such that $\mathrm{Spec}(A)$ is a noetherian topological space, but $A$ is non-noetherian. (Indeed, otherwise $A$ would be a discrete valuation ring.) Is there a concrete example of such a ring $A$?
|
Take the ring of formal power series (over $\mathbb{C}$, say) with exponents in $\mathbb{Z}^2$ under lex order. Edit: As Robin Chapman mentions, one must be careful about what this means. The precise construction for any totally ordered abelian group is described in the Wikipedia article .
|
|abstract-algebra|commutative-algebra|valuation-theory|
| 0
|
Concrete examples of valuation rings of rank two.
|
Let $A$ be a valuation ring of rank two. Then $A$ gives an example of a commutative ring such that $\mathrm{Spec}(A)$ is a noetherian topological space, but $A$ is non-noetherian. (Indeed, otherwise $A$ would be a discrete valuation ring.) Is there a concrete example of such a ring $A$?
|
Qiaochu's answer is sound in principle, but in practice one needs to be more careful with the definition of the ring. The quotient field $K$ of $A$ consists of formal Laurent series of the form $$f=\sum_{r=-r_0}^\infty x^r\sum_{s=-s_0(r)}^\infty a_{r,s}y^s.$$ Here $r_0$ is an integer and for each integer $r$, $s_0(r)$ is an integer (depending on $r$). So these are the power series where the powers of $x$ are bounded below and for each integer $r$ the coefficient of $x^r y^s$ is zero for all $s$ below a bound depending on $r$. This complicated-looking condition ensures that the product of two elements of $K$ is also an element of $K$ (note that one cannot multiply two general Laurent series). Then $A$ will consist of all such series with the additional conditions that $r_0=0$ and $s_0(0)=0$. The valuation of an element $f$ is the least $(r,s)$ under lexicographic ordering with $a_{r,s}\ne0$. Here the ordering is lexicographic: $(r,s)<(r',s')$ if $r<r'$, or if $r=r'$ and $s<s'$. A more high-brow interpretation of the condition for membership of
|
|abstract-algebra|commutative-algebra|valuation-theory|
| 1
|
What is the most elegant proof of the Pythagorean theorem?
|
The Pythagorean Theorem is one of the most popular theorems for mathematicians to prove, and there are many proofs available (including one from James Garfield ). What's the most elegant proof? My favorite is this graphical one: According to cut-the-knot: Loomis (pp. 49-50) mentions that the proof "was devised by Maurice Laisnez, a high school boy, in the Junior-Senior High School of South Bend, Ind., and sent to me, May 16, 1939, by his class teacher, Wilson Thornton." The proof has been published by Rufus Isaac in Mathematics Magazine, Vol. 48 (1975), p. 198.
|
More than dissection proofs, I find the proof using similarity most enlightening and intuitive. See the post by Terence Tao : drop a perpendicular from C to the hypotenuse AB. In the figure, the triangles whose areas are marked x and y are similar to the original triangle (which has area x+y). So, accepting that areas of similar right-angled triangles are proportional to the squares of their hypotenuses, x : y : x+y are in the ratio $a^2 : b^2 : c^2$, which is Pythagoras's theorem. See also the linked discussion by Alexander Giventhal, where he remarks that this proof is more general than tiling or dissection proofs, and is even proved by Euclid. If you think of $a^2+b^2=c^2$ as the geometrical result that the sum of the areas of squares constructed with sides a and b is the area of a square placed on c, then the Pythagorean theorem is true not just for squares constructed on the sides, but for any similar figures. For instance, as Euclid himself proves, something like the following is true (though it's stil
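In symbols, with $x$, $y$, $a$, $b$, $c$ as in the figure, the similarity step reads:

```latex
x = \left(\tfrac{a}{c}\right)^{2}(x+y), \qquad
y = \left(\tfrac{b}{c}\right)^{2}(x+y)
\;\Longrightarrow\;
x + y = \frac{a^{2}+b^{2}}{c^{2}}\,(x+y)
\;\Longrightarrow\;
a^{2}+b^{2} = c^{2}.
```

Adding the two area equations and cancelling the common factor $x+y$ finishes the proof.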
|
|geometry|algebra-precalculus|euclidean-geometry|triangles|big-list|
| 0
|
Are there more rational numbers than integers?
|
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled. To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention. The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is co
|
You can think about it a different way. Consider the set of real numbers between 0 and 1, and then the set of real numbers between 0 and 2. By intuition, it seems that the set of real numbers between 0 and 2 has double the size of the set between 0 and 1. However, this is not the case, because the two sets have the same cardinality . Consider the function $f(x) = 2x$. Every real between 0 and 1 is bijected to a real between 0 and 2. Therefore the sets are of the same size.
|
|elementary-set-theory|infinity|
| 0
|
What is the most elegant proof of the Pythagorean theorem?
|
The Pythagorean Theorem is one of the most popular to prove by mathematicians, and there are many proofs available (including one from James Garfield ). What's the most elegant proof? My favorite is this graphical one: According to cut-the-knot: Loomis (pp. 49-50) mentions that the proof "was devised by Maurice Laisnez, a high school boy, in the Junior-Senior High School of South Bend, Ind., and sent to me, May 16, 1939, by his class teacher, Wilson Thornton." The proof has been published by Rufus Isaac in Mathematics Magazine, Vol. 48 (1975), p. 198.
|
Not a proof in itself, but the book The Pythagorean Proposition by Loomis has probably the most comprehensive collection of proofs of the Pythagorean theorem.
|
|geometry|algebra-precalculus|euclidean-geometry|triangles|big-list|
| 0
|
Are there more rational numbers than integers?
|
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled. To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention. The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is co
|
The cardinality of the set of rationals is the same as the cardinality of the integers is the same as the cardinality of the natural numbers. When we count a finite set of elements, we are constructing a one-one map from the set onto a finite initial segment of the natural numbers. If we want to know if two finite sets have the same cardinality (are equi-cardinal) we can either: 1) count both sets and see if we get the same number, or 2) attempt to construct a one-one map from one set onto the other. If we can construct the map aimed at in (2), then the sets are equi-cardinal. Generalizing that procedure from the finite sets to arbitrary sets, we get that for any two sets, the sets have the same cardinality (are equi-cardinal) if there exists a bijection (a one-one map between the sets that is onto the target rather than merely into). For the finite case, if there is a one-one map that is a bijection, all one-one maps are bijective. That is not the case for infinite sets, which is the
|
|elementary-set-theory|infinity|
| 0
|
Are there more rational numbers than integers?
|
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled. To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention. The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is co
|
In mathematics a set is called infinite if it can be put into a 1-1 correspondence with a proper subset of itself, and finite if it is not infinite. (I know it seems crazy to have the concept of infinite as primitive and finite as derived, but it's simpler to do this, since otherwise you must assume that the integers exist before saying that a set is finite.) As for your remarks: - with your method (if you don't forget to throw out fractions like 4/6, which is equal to 2/3) you actually counted the rationals, since for each number you have a function which associates it to a natural number. It's true that you cannot count ALL rationals, or all integers; but you cannot draw a whole straight line either, can you? - with infinite sets you may build infinitely many mappings, but you just need a single 1-1 onto mapping to show that two sets have the same size.
|
|elementary-set-theory|infinity|
| 0
|
Are there more rational numbers than integers?
|
I've been told that there are precisely the same number of rationals as there are of integers. The set of rationals is countably infinite, therefore every rational can be associated with a positive integer, therefore there are the same number of rationals as integers. I've ignored sign-related issues, but these are easily handled. To count the rationals, consider sets of rationals where the denominator and numerator are positive and sum to some constant. If the constant is 2 there's 1/1. If the constant is 3, there's 1/2 and 2/1. If the constant is 4 there's 1/3, 2/2 and 3/1. So far we have counted out 6 rationals, and if we continue long enough, we will eventually count to any specific rational you care to mention. The trouble is, I find this very hard to accept. I have two reasons. First, this logic seems to assume that infinity is a finite number. You can count to and number any rational, but you cannot number all rationals. You can't even count all positive integers. Infinity is co
|
You may not be very satisfied with this answer, but I'll try to explain anyway. Countability. We're not really talking about whether you can "count all of the rationals", using some finite process. Obviously, if there is an infinite number of elements, you cannot count them in a finite amount of time using any reasonable process. The question is whether there is the same number of rationals as there are positive integers; this is what it means for a set to be "countable" --- for there to exist a one-to-one mapping from the positive integers to the set in question. You have described such a mapping, and therefore the rationals are "countable". (You may disagree with the terminology, but this does not affect whether the concept that it labels is coherent.) Alternative mappings. You seem to be dissatisfied with the fact that, unlike the case of a finite set, you can define an injection from the natural numbers to the rationals which is not surjective --- that you can in fact define a more
|
|elementary-set-theory|infinity|
| 0
|
How do I calculate expected value of partial normal distribution?
|
Suppose you have a normal distribution with mean=0, and stdev=1. So the expected value is 0. Now suppose you limit the outcomes, such that no values can be below 0. So 50% of values now equal 0, and rest of distribution is still normal. Running 1000000 trials, I come out with an expected value of .4 My question is how can I get this expected value through calculation? Thanks
|
The normal distribution has density function $f(x)=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}$; your new distribution has that density function on the positive reals, $P(0)=\frac{1}{2}$, and $P(x)=0$ for the negative reals. The expected value is $0\cdot\frac{1}{2}+\int_{0}^{\infty}x\cdot f(x)dx=\frac{1}{\sqrt{2\pi}}\approx0.398942$. edit : If you were to cut off at $x=c$ (assigning all the probability from below c to c itself) instead of $x=0$, your density function would be $f(x)=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}$ for $x>c$, $P(c)=\int_{-\infty}^{c}\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}dx$, and $P(x)=0$ for $x<c$. edit 2 : note that the exponent on e in all of the above is $-\frac{x^2}{2}$ (the exponent 2 on the x is, in the current TeX rendering, positioned and sized such as to be somewhat ambiguous) edit 3 : my explanation incorrectly mixed probability density functions and literal probabilities--this was solely an issue of terminology and the analytic results still stand, but I have a
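A quick Monte Carlo sketch (assuming the questioner's setup: values below the cutoff are replaced by the cutoff; the function name and seed are my choices) reproduces both the simulated 0.4 and the exact $1/\sqrt{2\pi}$:

```python
import math
import random

def censored_normal_mean(c=0.0, trials=200_000, seed=1):
    """Draw standard normals and replace every value below the cutoff c
    by c itself, then average.  For c = 0 the exact answer is
    0 * 1/2 + integral_0^inf x f(x) dx = 1/sqrt(2*pi) ~ 0.3989,
    i.e. the "0.4" the questioner saw in simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0), c)
    return total / trials

exact = 1.0 / math.sqrt(2.0 * math.pi)      # closed-form value ~ 0.39894
print(round(censored_normal_mean(), 3), round(exact, 3))
```

With a few hundred thousand trials the simulated mean agrees with the integral to about two decimal places.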
|
|calculus|statistics|probability-theory|
| 1
|
Are there any interesting semigroups that aren't monoids?
|
Are there any interesting and natural examples of semigroups that are not monoids (that is, they don't have an identity element)? To be a bit more precise, I guess I should ask if there are any interesting examples of semigroups $(X, \ast)$ for which there is not a monoid $(X, \ast, e)$ where $e$ is in $X$ . I don't consider an example like the set of real numbers greater than $10$ (considered under addition) to be a sufficiently 'natural' semigroup for my purposes; if the domain can be extended in an obvious way to include an identity element then that's not what I'm after.
|
One source of monoids is given by taking rings with identity, and forgetting about addition. So similarly, one source of semigroups that are not monoids is taking rings without identity, and forgetting about addition. With this in mind, let me explain one basic source of rings without identity. A basic source of rings is given by taking functions satisfying some reasonable condition on a space, e.g. continuous real or complex valued functions on a space, with pointwise addition and multiplication. Of course, the constant function 1 is continuous, and so this gives a ring with identity. But suppose now that we impose some condition, such as "all functions that are continuous, and which furthermore vanish at some specified point". This throws out the constant function 1, and so gives a ring without identity. Now you could naturally object that this is artificial (as per the requirement in the question that there not be an obvious extension to a monoid), so let me add more explanation as
|
|big-list|abstract-algebra|semigroups|monoid|
| 0
|
What is the form of curvature that is invariant under rotations and uniform scaling
|
This is a followup to this question , where I learned that curvature is invariant to rotations. I have learned of a version of curvature that is invariant under affine transformations . I am wondering if there a is a form of curvature between the two. Invariant under uniform scaling and rotation but not all affine transformations?
|
I don't know if this would suit you, but one thing you can consider (much more naive than the notion of affine curvature) is to fix a point $P_0$ on your curve, and then consider the function on the curve given by sending a point $P$ to the quantity $\mathrm{curvature}(P)/\mathrm{curvature}(P_0)$. This is a kind of relative curvature, where you measure how much everything is curving in comparison to the curvature at $P_0$, and it is invariant under scaling and rotation.
|
|geometry|differential-geometry|
| 1
|
Is there a possibility to choose fairly from three items when every choice can only have 2 options
|
My wife and I often can't decide which DVD to watch. If we have two options, we have a simple solution: I put one DVD in one hand behind my back and the other DVD in the other hand. She randomly chooses a hand, and the DVD in that hand is the one we watch. This procedure is easy to extend to any power of 2. If we have 4 DVDs I hold 2 in one hand and 2 in the other. When a pair of DVDs is chosen, I split them across my two hands and she chooses again. The question is: what can we do when we have 3 DVDs? The assumptions we make are: I am not impartial. If I can influence the result somehow I will try to do that. My wife really chooses a side randomly every time, independent of what she chose earlier. I don't have any other place to hide the DVDs, so every DVD is either visible or in one of my two hands. As a requirement, the procedure must have a predetermined number of steps. Not more, not less. If this is not possible, a solution that guarantees to finish wi
|
There is no such procedure with an upper bound on the number of steps. Here is a proof. Suppose there were such a procedure taking no more than $N$ steps. At each step you basically generate a random integer between $1$ and $2$. Consider all possible sequences of no more than $N$ generated numbers. Every such sequence has probability of the form $\frac{x}{2^N}$. Now consider the sequences for which the procedure outputs "1" (i.e., the result equals $1$). The sum of their probabilities is some $\frac{x_1}{2^N}$, and it must equal $\frac{1}{3}$. But $\frac{x_1}{2^N}$ can't equal $\frac{1}{3}$, since $3$ divides no power of $2$.
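The impossibility result doesn't rule out a procedure that merely terminates quickly in expectation. A sketch of the standard rejection-sampling workaround (my own, not from the answer):

```python
import random

def pick_of_three(coin=lambda: random.randint(0, 1)):
    """Two fair binary choices give 4 equally likely outcomes; keep three
    of them and retry on the fourth.  There is no bound on the number of
    steps (consistent with the dyadic impossibility proof), but the
    expected number of coin flips is only 8/3."""
    while True:
        outcome = 2 * coin() + coin()   # uniform on {0, 1, 2, 3}
        if outcome < 3:
            return outcome              # uniform on {0, 1, 2}

random.seed(0)
counts = [0, 0, 0]
for _ in range(30000):
    counts[pick_of_three()] += 1
print(counts)   # each count near 10000
```

In the DVD setting, each "coin flip" is one round of the wife choosing a hand.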
|
|probability-theory|algorithms|
| 0
|
Dot product in coordinates
|
Dot product of two vectors on plane could be defined as product of lengths of those vectors and cosine of angle between them. In cartesian coordinates dot product of vectors with coordinates $(x_1, y_1)$ and $(x_2, y_2)$ is equal to $x_1x_2 + y_1y_2$. How to prove it?
|
The dot product is invariant under rotations, so we may rotate our coordinate system so that v lies along the x-axis. In this case, $v = (|v|, 0)$. Letting $w = (x,y)$, we have (using the definition of the dot product in Cartesian coordinates) $v \cdot w = |v| x$. But what is $x$? Well, if you draw the picture and let $\theta$ be the angle between v and w, then we see that $\cos \theta = x/|w|$, so that $x = |w| \cos \theta$. Thus $v\cdot w = |v||w| \cos \theta$.
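A small numerical sanity check of the two formulas (my own sketch; the function name is not from the answer):

```python
import math

def dot_two_ways(v, w):
    """Compare the coordinate formula x1*x2 + y1*y2 with the geometric
    definition |v| |w| cos(theta).  theta is computed as the difference
    of the polar angles; atan2 handles all quadrants correctly."""
    coord = v[0] * w[0] + v[1] * w[1]
    theta = math.atan2(w[1], w[0]) - math.atan2(v[1], v[0])
    geom = math.hypot(v[0], v[1]) * math.hypot(w[0], w[1]) * math.cos(theta)
    return coord, geom

print(dot_two_ways((3, 4), (-1, 2)))   # both values ~ 5
```

Up to floating-point rounding, the two computations agree for any pair of plane vectors.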
|
|geometry|linear-algebra|
| 0
|
How can I understand and prove the "sum and difference formulas" in trigonometry?
|
The " sum and difference " formulas often come in handy, but it's not immediately obvious that they would be true. \begin{align} \sin(\alpha \pm \beta) &= \sin \alpha \cos \beta \pm \cos \alpha \sin \beta \\ \cos(\alpha \pm \beta) &= \cos \alpha \cos \beta \mp \sin \alpha \sin \beta \end{align} So what I want to know is, How can I prove that these formulas are correct? More importantly, how can I understand these formulas intuitively? Ideally, I'm looking for answers that make no reference to Calculus, or to Euler's formula , although such answers are still encouraged, for completeness.
|
Here are my favorite diagrams: As given, the diagrams put certain restrictions on the angles involved: neither angle, nor their sum, can be larger than 90 degrees; and neither angle, nor their difference, can be negative. The diagrams can be adjusted, however, to push beyond these limits. (See, for instance, this answer .) Here's a bonus mnemonic cheer (which probably isn't as exciting to read as to hear): Sine, Cosine, Sign , Cosine, Sine! Cosine, Cosine, Co-Sign , Sine, Sine! The first line encapsulates the sine formulas; the second, cosine. Just drop the angles in (in order $\alpha$ , $\beta$ , $\alpha$ , $\beta$ in each line), and know that "Sign" means to use the same sign as in the compound argument ("+" for angle sum, "-" for angle difference), while "Co-Sign" means to use the opposite sign .
|
|intuition|trigonometry|
| 0
|
In a graph, is it always possible to construct a cycle basis in which each edge is shared by at most 2 of the basis cycles?
|
Let's say we have a graph with edge and vertex sets $(E,V)$, where every vertex is incident to at least one edge. There are many ways to choose a cycle basis for it. Now the question is: is it always possible to find a cycle basis in which each edge is shared by at most $2$ of the basis cycles? Edit: There is a mathematical argument proving why it is not possible, but admittedly such highly abstract reasoning is a bit hard for me to grasp. I would appreciate it if someone could provide a concrete example of such a graph.
|
The question was answered in the negative on Math Overflow. See https://mathoverflow.net/questions/30759/in-a-graph-is-it-always-possible-to-construct-a-set-of-cycle-bases-with-each-an Edit: We get a counterexample from any non-planar graph in which every edge is part of at least one cycle. Here's a reference: P. V. O'Neil, Proc. AMS, 37 (2), Feb. 1973, 617-8. I'll repeat the argument here. Take a nonplanar graph with every edge in at least one cycle. If it had a cycle basis like the one you want, then we could generate one for $K_{3,3}$ or $K_{5}$. Suppose we had such a basis for $K_{3,3}$. There are 4 cycles in that basis. (A cycle basis has m-n+1 elements, where m is the number of edges and n is the number of vertices.) Now take the binary sum of the four cycles. The resulting five cycles include every edge exactly twice, so there are a total of 2 $\cdot$ (number of edges) = 18 edges in those five cycles. But each cycle has at least 4 edges, so the five cycles must have at least 20 edges. Contradiction.
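The counts used in the argument can be checked by brute force; a sketch (the `girth` helper and the adjacency encoding are my own):

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle: for each edge (u, v), remove it,
    find the shortest remaining u-v path by BFS, and close the cycle."""
    best = float("inf")
    for u in adj:
        for v in adj[u]:
            if u < v:
                dist = {u: 0}
                q = deque([u])
                while q:
                    a = q.popleft()
                    for b in adj[a]:
                        if {a, b} == {u, v} or b in dist:
                            continue
                        dist[b] = dist[a] + 1
                        q.append(b)
                if v in dist:
                    best = min(best, dist[v] + 1)
    return best

# K_{3,3}: vertices 0-2 on one side, 3-5 on the other.
adj = {u: [v for v in range(6) if (u < 3) != (v < 3)] for u in range(6)}
m = sum(len(nbrs) for nbrs in adj.values()) // 2    # 9 edges
n = len(adj)                                        # 6 vertices
print(m - n + 1)     # cycle-space dimension (m = edges, n = vertices): 4
print(girth(adj))    # girth 4, so each of the 5 cycles has >= 4 edges
print(2 * m, 5 * 4)  # 18 edge slots available vs. 20 required
```

The last line exhibits the contradiction numerically: 18 < 20.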
|
|graph-theory|
| 1
|
How to explain fractals to a layperson and to someone with more math training?
|
I have a Ph.D. in computational and theoretical chemistry with advanced but field-oriented knowledge of mathematics. I am fascinated by fractals, but I am unable to understand them from the formal point of view. At my level of understanding, they look like a graphical rendering of an ill-conditioned iterative problem, where small variations of the initial condition lead to huge changes in the final result, but that's just what I got out of it with my current knowledge. How would you explain fractals (such as the Mandelbrot set) to a layperson with basic mathematics knowledge from high school, and how would you instead explain them to someone who has more math training, but no formal background in the field? This question relates to a post on the Mandelbrot set I wrote on my blog some time ago. If you have any comments on what I was doing with my tinkering of the parameters (to get some keywords for further exploration), they're greatly appreciated. I would like to explain it better to my readers, but I am u
|
For a "high-level" explanation, I would say this: Fractals are surprisingly complex patterns that result from the repeated application of relatively simple operations/rules. One of the easiest to visualize examples is the Koch snowflake , constructed by adding smaller triangles to each face of the figure at each iteration: A more real-world example is the fern leaf . The DNA in a single plant cell encodes enough information to describe the structure of an entire leaf (and the entire plant, for that matter) without explicitly describing the location of each cell . Instead, the cells grow according to a set of simple rules that result in the self-similar appearance of the fern, even at smaller and smaller levels: For a more complex mathematical explanation that still remains tied to the real world, have a look at the basic Ricker model of population growth and the resulting bifurcation diagrams : (source: phaser.com ) The x-axis on this graph is population growth rate and the y-axis is p
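A minimal sketch of the Ricker iteration behind such a bifurcation diagram (the function name and the scaling of the carrying capacity to 1 are my choices, not from the answer):

```python
import math

def ricker_attractor(r, n0=0.5, burn=1000, keep=64):
    """Iterate the Ricker map N_{t+1} = N_t * exp(r * (1 - N_t)),
    discard a long transient, then collect the distinct values visited:
    1 value = stable equilibrium, 2 = a two-cycle, many = chaos.
    The bifurcation diagram plots these values against the rate r."""
    n = n0
    for _ in range(burn):
        n = n * math.exp(r * (1.0 - n))
    seen = set()
    for _ in range(keep):
        n = n * math.exp(r * (1.0 - n))
        seen.add(round(n, 6))
    return sorted(seen)

print(len(ricker_attractor(1.5)))   # 1: stable equilibrium
print(len(ricker_attractor(2.2)))   # 2: period-two oscillation
print(len(ricker_attractor(3.0)))   # many distinct values: chaotic band
```

Sweeping r over a range and plotting the returned values for each r reproduces the period-doubling cascade mentioned above.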
|
|fractals|
| 1
|
Surprising Generalizations
|
I just learned (thanks to Harry Gindi's answer on MO and to Qiaochu Yuan's blog post on AoPS ) that the chinese remainder theorem and Lagrange interpolation are really just two instances of the same thing. Similarly the method of partial fractions can be applied to rationals rather than polynomials. I find that seeing a method applied in different contexts, or just learning a connection that wasn't apparent helps me appreciate a deeper understanding of both. So I ask, can you help me find more examples of this? Especially ones which you personally found inspiring.
|
Localization When I learned that you could localize categories (and not just abelian ones!) I was floored. The general idea that we take a class of morphisms in a category and functorially send them to another category where they become isomorphisms is awesome. It is also very important in my work, which is generalizing some ideas of Algebraic Geometry to a more categorical setting. Here is a link!
|
|soft-question|big-list|intuition|
| 0
|
Surprising Generalizations
|
I just learned (thanks to Harry Gindi's answer on MO and to Qiaochu Yuan's blog post on AoPS ) that the chinese remainder theorem and Lagrange interpolation are really just two instances of the same thing. Similarly the method of partial fractions can be applied to rationals rather than polynomials. I find that seeing a method applied in different contexts, or just learning a connection that wasn't apparent helps me appreciate a deeper understanding of both. So I ask, can you help me find more examples of this? Especially ones which you personally found inspiring.
|
Galois Connections Let's be honest, the correspondence between Galois groups and field extensions is pretty hott. The first time I saw this I was duly impressed. However, about two years ago, I learned about universal covering spaces. Wow! I swear my understanding of covering spaces doubled when the prof told me that this was a "Galois correspondence for fundamental groups and covering spaces". Again, here is a link!
|
|soft-question|big-list|intuition|
| 0
|
Surprising Generalizations
|
I just learned (thanks to Harry Gindi's answer on MO and to Qiaochu Yuan's blog post on AoPS ) that the chinese remainder theorem and Lagrange interpolation are really just two instances of the same thing. Similarly the method of partial fractions can be applied to rationals rather than polynomials. I find that seeing a method applied in different contexts, or just learning a connection that wasn't apparent helps me appreciate a deeper understanding of both. So I ask, can you help me find more examples of this? Especially ones which you personally found inspiring.
|
Model categories as a framework for both complexes of R-modules and topological spaces (making precise, for example, analogy between taking projective resolution and replacing a space with weakly homotopy equivalent CW-complex).
|
|soft-question|big-list|intuition|
| 0
|