A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language
The following is a quote from Surely You're Joking, Mr. Feynman!. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and easily translatable into everyday language. (Apparently the Banach-Tarski paradox was not a good example.) Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false." It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?" "No holes." "Impossible!" "Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!" Just when they think they've got me, I remind them, "But you said an orange! You can't
My favorite would probably be Goodstein's theorem: Start with your favorite number (mine is $37$) and express it in hereditary base $2$ notation. That is, write it as a power of $2$ with exponents powers of $2$, etc. So, $37 = 2^{(2^2 + 1)} + 2^2 + 1$. This is the first element of the sequence. Next, change all the $2$'s to $3$'s, and subtract one from what's remaining and express in hereditary base $3$ notation. We get $3^{(3^3 + 1)} + 3^3 + 1 - 1= 3^{(3^3 + 1)} + 3^3$ (which is roughly $2 \times 10^{13}$). This is the second element of the sequence. Next, change all $3$'s to $4$'s, subtract one, and express in hereditary base $4$ notation. We get $4^{(4^4 + 1)} + 4^4 - 1 = 4^{(4^4 + 1)} + 3*4^3 + 3*4^2 + 3*4 + 3$ (which is roughly $5 \times 10^{154}$) . This is the third element of the sequence. Rinse, repeat: at the $n^{th}$ stage, change all the "$n+1$" to "$n+2$", subtract $1$, and reexpress in hereditary base $n+2$ notation. The theorem is: no matter which number you start with,
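The bookkeeping above is easy to mechanize; here is a Python sketch (helper names are mine) that reproduces the first few terms:

```python
# Hereditary-base bookkeeping for the Goodstein sequence: bump(m, b)
# rewrites m in hereditary base b with every b replaced by b + 1.
def bump(m, b):
    exponent, result = 0, 0
    while m:
        digit = m % b
        if digit:
            # the exponent itself must also be rewritten hereditarily
            result += digit * (b + 1) ** bump(exponent, b)
        m //= b
        exponent += 1
    return result

def goodstein(m, steps):
    """First `steps` + 1 elements of the Goodstein sequence starting at m."""
    b, seq = 2, [m]
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq
```

Starting from $37$ this reproduces $3^{28} + 27$ and $4^{257} + 4^4 - 1$ as above; starting from $3$, the sequence already reaches $0$ after six steps, a tiny instance of the theorem.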
|soft-question|big-list|examples-counterexamples|
0
How many knight's tours are there?
The knight's tour is a sequence of 64 squares on a chess board, where each square is visited once, and each subsequent square can be reached from the previous by a knight's move. Tours can be cyclic, if the last square is a knight's move away from the first, and acyclic otherwise. There are several symmetries among knight's tours. Both acyclic and cyclic tours have eight reflectional symmetries, and cyclic tours additionally have symmetries arising from starting at any square in the cycle, and from running the sequence backwards. Is it known how many knight's tours there are, up to all the symmetries?
I was recently surprised to discover that it's actually not known (edit: see below). The number of closed knight's tours (cyclic) was computed in the 1990s, using binary decision diagrams. There are 26,534,728,821,064 closed directed knight's tours, and the number of undirected ones is half that, or 13,267,364,410,532. If you count equivalence classes under rotation and reflection, there are slightly more than 1/8th of that: 1,658,420,855,433. (Loebbing and Wegener (1996) wrote a paper "The Number of Knight's Tours Equals 33,439,123,484,294 — Counting with Binary Decision Diagrams"; the number in the title is a mistake, as they pointed out in a comment on their paper. Brendan McKay independently computed the correct number with another method, and the original authors seem to have later found the same answer.) Finding the exact number of open tours (not cyclic/reentrant) was open, but was estimated to be about $10^{15}$ or $2\times 10^{16}$. Edit: Please see and upvote the answer by user ranton.
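For intuition on why brute force is hopeless at 8×8 (hence the binary-decision-diagram methods above), here is an exhaustive backtracking count of directed open tours on tiny boards; function names are mine:

```python
# Brute-force enumeration of directed open knight's tours on an n x n board.
# Feasible only for tiny boards: n = 5 already gives 1728 directed open
# tours (OEIS A165134), and the search explodes from there.
def count_open_tours(n):
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    # precompute neighbour lists, squares indexed 0 .. n*n - 1
    neighbours = []
    for x in range(n):
        for y in range(n):
            neighbours.append([(x + dx) * n + (y + dy)
                               for dx, dy in deltas
                               if 0 <= x + dx < n and 0 <= y + dy < n])
    visited = bytearray(n * n)

    def dfs(sq, depth):
        if depth == n * n:
            return 1
        total = 0
        for nxt in neighbours[sq]:
            if not visited[nxt]:
                visited[nxt] = 1
                total += dfs(nxt, depth + 1)
                visited[nxt] = 0
        return total

    total = 0
    for start in range(n * n):
        visited[start] = 1
        total += dfs(start, 1)
        visited[start] = 0
    return total
```

The 3×3 and 4×4 boards have no open tour at all; 5×5 is the smallest square board with one.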
|combinatorics|graph-theory|hamiltonian-path|knight-tours|
1
What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)?
In mathematics, there seem to be a lot of different types of numbers. What exactly are: Real numbers Integers Rational numbers Decimals Complex numbers Natural numbers Cardinals Ordinals And, as workmad3 points out, some more advanced types of numbers (which I'd never heard of): Hyper-reals Quaternions Imaginary numbers Are there any other types of classifications of numbers I missed?
I think you were being a little too hard on Isaac. The truth is that the real numbers are a sophisticated mathematical construction and that any explanation of what they "are" which pretends otherwise is a convenient fiction. Mathematicians need this kind of sophisticated construction because it is what is required for rigorous proofs. Before people explicitly constructed the real numbers and used them to define and prove things about other concepts, it was never totally clear what was true or what was false, and everybody was very confused. For example, Cantor proved that the number of points in the plane is the same as the number of points on a line. Many people thought that this was impossible before he did it; they had an intuition that you couldn't possibly "fit" the plane into the line. More generally, people were pretty sure you couldn't fit $\mathbb{R}^n$ into $\mathbb{R}^m$ if $n$ was greater than $m$. It wasn't until quite a bit later that mathematicians formalized and p
|terminology|definition|number-systems|
0
Correct usage of the phrase "In the sequel"? History? Alternatives?
While I feel quite confident that I've inferred the correct meaning of "In the sequel" from context, I've never heard anyone explicitly tell me, so first off, to remove my niggling doubts: What does this phrase mean? (Someone recently argued to me that "sequel" was actually supposed to refer to a forthcoming second part of a paper, which I found highly unlikely, but I'd just like to make sure.) My main questions: At what points in the text, and for what kinds of X, is it appropriate to use the phrase "In the sequel, X" in a paper? In a book? Is it ever acceptable to introduce definitions via "In the sequel, we introduce the concept of a "blah", which is a thing satisfying ..." at the start of a paper or book without a formal "Definition. A "blah" is a thing satisfying ..." in the main text of the paper or book? Finally, out of curiosity, I'm wondering how long this phrase has been around, if it's considered out of date or if it's still a popular phrase, and what some good alternative
Your interpretation of how "in the sequel" is used in mathematical literature is correct: it means "in what follows", "in the remainder of the present text", "from now on"... As you can see, there are many other such expressions, and I don't think that "in the sequel" has any particular nuance of meaning that these other phrasings lack. On to your question of whether this is acceptable usage in a math paper: acceptable, yes, but not completely recommended. In more detail: This phrase is, as you point out, rather common, so any experienced reader of math will have seen it before. Also its meaning should be relatively transparent to a literate native speaker of English: this is after all what the dictionary says that "sequel" means. However, there are two drawbacks. First, as Mariano says, nowadays we hear "sequel" used most commonly for the next movie in a franchise, so its use in a math paper will inevitably make some readers think that you are referring to [TITLE OF YOUR PAPER] Part II:
|soft-question|terminology|
1
For any prime $p > 3$, why is $p^2-1$ always divisible by $24$?
Let $p>3$ be a prime. Prove that $24 \mid p^2-1$ . I know this is very basic and old hat to many, but I love this question and I am interested in seeing whether there are any proofs beyond the two I already know.
This is somewhere between an answer and commentary. As others have said, the question is equivalent to showing: for any prime $p > 3$ , $p^2 \equiv 1 \pmod 3$ and $p^2 \equiv 1 \pmod 8$ . Both of these statements are straightforward to show by just looking at the $\varphi(3) = 2$ reduced residue classes modulo $3$ and the $\varphi(8) = 4$ reduced residue classes modulo $8$ . But what is their significance? For a positive integer $n$ , let $U(n) = (\mathbb{Z}/n\mathbb{Z})^{\times}$ be the multiplicative group of units ("reduced residues") modulo $n$ . Like any abelian group $G$ , we have a squaring map $[2]: G \rightarrow G$ , $g \mapsto g^2$ , the image of which is the set of squares in $G$ . So, the question is equivalent to: for $n = 3$ and also $n = 8$ , the subgroup of squares in $U(n)$ is the trivial group. The group $U(3) = \{ \pm 1\}$ has order $2$ ; since $(-1)^2 = 1$ , the fact that the subgroup of squares is equal to $1$ is pretty clear. But more generally, for any odd prime
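A quick empirical check (not a proof) of both congruences and of the conclusion, in Python:

```python
from math import gcd

# The subgroup of squares in the unit groups mod 3 and mod 8 is trivial,
# i.e. u^2 = 1 for every unit u; hence p^2 - 1 is divisible by 3 and by 8,
# so by 24, for every p coprime to 24 -- in particular every prime p > 3.
def square_classes(n):
    return {u * u % n for u in range(1, n) if gcd(u, n) == 1}

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]
```

Here `square_classes(3)` and `square_classes(8)` both come out to `{1}`, which is exactly the triviality of the subgroup of squares discussed above.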
|elementary-number-theory|prime-numbers|divisibility|
0
Calculating an Angle from $2$ points in space
Given two points $p_1$ , $p_2$ around the origin $(0,0)$ in $2D$ space, how would you calculate the angle from $p_1$ to $p_2$ ? How would this change in $3D$ space?
In case you want to implement it in some programming language: usually there is a function $\operatorname{atan2}(y, x)$ which returns the oriented angle between the vector $(x, y)$ and $(1, 0)$. In that case one could use $$\operatorname{atan2}(\text{vector product}, \text{scalar product})$$ applied to the two given vectors. This is usually more stable than using just $\arccos$ or $\arcsin$.
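A sketch in Python of both cases (function names are mine): in 2D, atan2 of the cross and dot products gives the signed angle from $p_1$ to $p_2$; in 3D the same idea with the norm of the cross product gives the unsigned angle:

```python
import math

# Signed 2D angle from p1 to p2 about the origin, via atan2(cross, dot).
def angle_between(p1, p2):
    cross = p1[0] * p2[1] - p1[1] * p2[0]   # z-component of the cross product
    dot = p1[0] * p2[0] + p1[1] * p2[1]
    return math.atan2(cross, dot)           # in (-pi, pi]

# In 3D the cross product is a vector, so only an unsigned angle makes sense.
def angle_between_3d(p1, p2):
    cx = p1[1] * p2[2] - p1[2] * p2[1]
    cy = p1[2] * p2[0] - p1[0] * p2[2]
    cz = p1[0] * p2[1] - p1[1] * p2[0]
    dot = p1[0] * p2[0] + p1[1] * p2[1] + p1[2] * p2[2]
    return math.atan2(math.hypot(cx, cy, cz), dot)   # unsigned, in [0, pi]
```

Neither function requires the inputs to be normalized, which is part of why this formulation is more stable than $\arccos$ of the normalized dot product.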
|linear-algebra|geometry|
0
Applications of class number
There is the notion of class number from algebraic number theory. Why is such a notion defined and what good comes out of it? It is nice if it is $1$; we have unique factorization of all ideals; but otherwise?
As others have said, often what you want for a particular Diophantine application is that the class number of a certain number field be relatively prime to a certain number. The famous example of this (as already noted by others) is Kummer's Theorem that for an odd prime $p$ , the Fermat equation $x^p + y^p = z^p$ has no integer solutions with $xyz \neq 0$ if the ring of integers of $\mathbb{Q}[e^{\frac{2 \pi i}{p}}]$ has class number prime to $p$ . Another -- simpler -- nice example is the Mordell equation $y^2 + k = x^3$ . If $k \equiv 1,2 \pmod 4$ and the ring $\mathbb{Z}[\sqrt{-k}]$ has class number prime to $3$ , then all of the integer solutions to the Mordell equation can be found. See Section 4 of http://alpha.math.uga.edu/~pete/4400MordellEquation.pdf for an exposition of this which is (I hope) reasonably elementary and accessible to undergraduates.
|algebraic-number-theory|
1
How are we able to calculate specific numbers in the Fibonacci Sequence?
I was reading up on the Fibonacci sequence, $1,1,2,3,5,8,13,\ldots$, when I noticed some were able to calculate specific numbers. So far I've only figured out creating an array and counting to the value, which is incredibly simple, but I can't seem to find any formula for calculating a Fibonacci number based on its position. Is there a way to do this? If so, how are we able to apply these formulas to arrays?
To expand on falagar's answer, my favourite proof of Binet's formula: ...which I was going to post a summary of here, but remembered that everything was awful without TeX, so here is a link to some notes on it I found on Google. The basic idea is to treat pairs of Fibonacci numbers, adjacent in the sequence, as vectors. Moving on to the next adjacent pair induces a linear transformation not unlike that of the matrix falagar posted. Calculating eigenvalues and eigenvectors gives a complete prediction of where an initial vector will find itself, predicting the whole sequence. It's quite a lot of work, but I think it's rather illuminating.
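A related $O(\log n)$ method worth knowing is fast doubling, which follows from squaring the matrix falagar posted; a short sketch in Python (with the convention $F(0)=0$, $F(1)=1$):

```python
# Fast-doubling Fibonacci: F(2m) = F(m)(2F(m+1) - F(m)) and
# F(2m+1) = F(m)^2 + F(m+1)^2, both read off from squaring [[1,1],[1,0]]^m.
def fib(n):
    def pair(k):                 # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k >> 1)      # a = F(m), b = F(m+1), m = k >> 1
        c = a * (2 * b - a)      # F(2m)
        d = a * a + b * b        # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return pair(n)[0]
```

Unlike Binet's formula in floating point, this stays exact for arbitrarily large $n$, since Python integers are unbounded.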
|combinatorics|generating-functions|fibonacci-numbers|
0
Fermat's Two Square Theorem: How do you prove that an odd prime is the sum of two squares iff it is congruent to 1 mod 4?
It is a theorem in elementary number theory that if $p$ is a prime and congruent to 1 mod 4, then it is the sum of two squares. Apparently there is a trick involving arithmetic in the gaussian integers that lets you prove this quickly. Can anyone explain it?
Here is another proof without complex numbers. We start by proving that there exists $z \in \mathbb{N}$ such that $z^2 + 1 \equiv 0 \pmod p$. We do this in the same way as Akhil Mathew. Suppose we have $a^2 + b^2 = pm$. Take $x$ and $y$ such that $x \equiv a \pmod m$ and $y \equiv b \pmod m$ and $x, y \in [-m/2, m/2)$. Consider $u = ax + by$ and $v = ay - bx$. Then $u^2 + v^2 = (a^2 + b^2)(x^2 + y^2)$. Moreover, $u$ and $v$ are multiples of $m$. Hence $(u/m)^2 + (v/m)^2 = p (x^2 + y^2)/m$. Here $(x^2 + y^2)/m$ is an integer because of the definition of $x$ and $y$ and the fact that $a^2 + b^2 = pm$. Also, $(x^2 + y^2)/m$ is at most $m/2$. Now we replace $a$ by $u/m$ and $b$ by $v/m$ and continue this process until we get $m=1$. Notice that this is quite an efficient way to find a representation of $p$ as a sum of two squares - it takes $O(\log p)$ steps, provided we have found $z$ such that $z^2 + 1$ is a multiple of $p$.
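The descent translates almost line-for-line into code; a sketch in Python (the initial $z$ is found via Euler's criterion, and the function name is mine):

```python
# Descent exactly as above: start from (a, b) = (z, 1) with z^2 + 1 = pm,
# and shrink m at each step until m = 1. Requires p prime with p = 1 mod 4.
def two_squares(p):
    # z^2 = -1 (mod p): take z = n^((p-1)/4) for any quadratic non-residue n
    z = next(pow(n, (p - 1) // 4, p) for n in range(2, p)
             if pow(n, (p - 1) // 2, p) == p - 1)
    a, b = z, 1
    while (m := (a * a + b * b) // p) > 1:
        x, y = a % m, b % m
        if x > m // 2:
            x -= m               # bring residues into [-m/2, m/2]
        if y > m // 2:
            y -= m
        # both divisions are exact, since m divides u = ax+by and v = ay-bx
        a, b = (a * x + b * y) // m, (a * y - b * x) // m
    return abs(a), abs(b)
```

For example, $p = 13$ yields $\{2, 3\}$, i.e. $13 = 2^2 + 3^2$.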
|elementary-number-theory|prime-numbers|
0
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
Any integrable function that gives a finite nonzero answer can be modified to suit your need. Suppose $\int f(x)\,dx=A$; then let $g(x)=f(x)/A$, and automatically we have $\int g(x)\, dx=A/A=1$. (In fact, every continuous probability density function must have this property.)
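The rescaling trick is easy to check numerically; here is a sketch (function names are mine) using $f(x) = e^{-|x|}$, whose exact integral over the line is $2$:

```python
import math

# Numerically normalize f(x) = exp(-|x|): its integral over the real line
# is exactly 2, and the tails beyond +/-50 are negligible (~ 2 e^-50).
def trapezoid(func, a, b, n=100000):
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b)) + sum(func(a + i * h) for i in range(1, n))
    return s * h

f = lambda x: math.exp(-abs(x))
A = trapezoid(f, -50, 50)        # approximately 2
g = lambda x: f(x) / A           # now integrates to 1
```

By construction, integrating `g` with the same rule returns exactly $A/A = 1$ up to floating-point rounding.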
|big-list|calculus|analysis|
1
Counting primes
Let $\pi(x)$ be the number of primes not greater than $x$. Wikipedia article says that $\pi(10^{23}) = 1,925,320,391,606,803,968,923$. The question is how to calculate $\pi(x)$ for large $x$ in a reasonable time? What algorithms do exist for that?
You can use the inclusion-exclusion principle to get a boost over the sieve of Eratosthenes.
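The simplest such inclusion-exclusion method is Legendre's; a sketch in Python (function names are mine), where $\phi(x,a)$ counts the integers in $[1,x]$ with no prime factor among the first $a$ primes:

```python
import math
from functools import lru_cache

# Legendre's inclusion-exclusion, the ancestor of the Meissel-type methods:
# pi(x) = phi(x, a) + a - 1, where a = pi(sqrt(x)).
def prime_count(x):
    if x < 2:
        return 0
    root = math.isqrt(x)
    sieve = [True] * (root + 1)          # sieve the primes up to sqrt(x)
    sieve[0:2] = [False, False]
    for i in range(2, math.isqrt(root) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    primes = [i for i, is_p in enumerate(sieve) if is_p]
    a = len(primes)

    @lru_cache(maxsize=None)
    def phi(x, a):
        if a == 0:
            return x
        # remove multiples of the a-th prime; recursion handles overcounting
        return phi(x, a - 1) - phi(x // primes[a - 1], a - 1)

    return phi(x, a) + a - 1
```

This already beats sieving everything up to $x$, and the serious algorithms (Meissel, Lehmer, Lagarias-Miller-Odlyzko) are optimizations of this recursion.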
|number-theory|algorithms|prime-numbers|
0
Approximation symbol: Is $\pi \approx 3.14\dots$ equivalent to $\pi \fallingdotseq 3.14\dots$?
This could be a trivial question, but what exactly is the difference between these two expressions? Am I correct to use both interchangeably whenever I need to express an approximation of $\pi$? I'm a bit confused, as here it states that $\pi$ can be expressed by $\fallingdotseq$ as it's not a rational number, but $\pi$ can also be expressed by a series (asymptotic), so it should be $\approx$ as well. $$\pi \approx 3.14\dots$$ $$\pi \fallingdotseq 3.14\dots$$
Any mathematical notation is OK as long as it is common knowledge in your community. For instance, I believe I fully understand the meaning of the $\approx$ symbol. However, I haven't ever seen the second symbol you provided. To be on the safe side you should provide a definition of any relation symbol you don't consider to be common knowledge. This may happen as a short remark ("..., where $\approx$ denotes ...") or maybe as a table of the symbols used in the front matter of your work. As with any definition in mathematics, there is no right or wrong in the symbol/notion/etc. you use, only proper or unsound definitions. Also: when in doubt, use the symbol that is used more commonly in the standard textbooks of your field. There is no benefit in being avant-garde at notation.
|definition|approximation|
1
Meaning of closed points of a scheme
This is a question in Liu's book. Let $X$ be a quasi-compact scheme. Show that $X$ contains a closed point. Well I'm unable to do this question, so any help would be appreciated. This question also makes me curious to know about the meaning/use of closed points of a scheme in general - by that I mean a scheme which is not an algebraic variety/local scheme over a field, which has a geometric meaning. Thanks!
Zorn's lemma implies there is a minimal nonempty closed set $F \subset X$, i.e. one with no nonempty proper closed subsets (Zorn's lemma applies because, by quasi-compactness, any descending chain of nonempty closed sets has nonempty intersection). It is sufficient to find a closed point in $F$. Now $F$ is itself a scheme, and it has a nonempty open subset $U=\mathrm{Spec}\, A$ for $A$ a ring. This must be all of $F$ by minimality, since its complement is a proper closed subset. A maximal ideal in $A$ gives a closed point in $F$, hence in $X$. There are people here who could give a much better answer to your other question, so I'll leave it.
|intuition|algebraic-geometry|
0
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
One good example is the standard Gaussian distribution , $\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$. This is the most straightforward example of a continuous probability distribution function as mentioned by KennyTM above.
|big-list|calculus|analysis|
0
Counting primes
Let $\pi(x)$ be the number of primes not greater than $x$. Wikipedia article says that $\pi(10^{23}) = 1,925,320,391,606,803,968,923$. The question is how to calculate $\pi(x)$ for large $x$ in a reasonable time? What algorithms do exist for that?
The most efficient prime counting algorithms currently known are all essentially optimizations of the method developed by Meissel in 1870, e.g. see the discussion here http://primes.utm.edu/howmany.shtml
|number-theory|algorithms|prime-numbers|
1
Why are Hopf algebras called quantum groups?
Why are noncommutative nonassociative Hopf algebras called quantum groups? This seems to be a purely mathematical notion and there is no quantum anywhere in it prima facie.
One way that Hopf algebras come up is as the algebra of (real or complex) functions on a topological group. The multiplication is commutative since it is just pointwise multiplication of functions. However, in non-commutative geometry you want to replace the algebra of functions on a space with a non-commutative algebra, giving a non-commutative Hopf algebra. This relates to quantum mechanics because there the analog of the classical coordinate functions of position and momentum do not commute. Therefore we think of the algebra of functions on a quantum "space" as being non-commutative.
|terminology|noncommutative-geometry|quantum-groups|
1
Probability to find connected pixels
Say I have an image, with pixels that can be either $0$ or $1$. For simplicity, assume it's a $2D$ image (though I'd be interested in a $3D$ solution as well). A pixel has $8$ neighbors (if that's too complicated, we can drop to $4$-connectedness). Two neighboring pixels with value $1$ are considered to be connected. If I know the probability $p$ that an individual pixel is $1$, and if I can assume that all pixels are independent, how many groups of at least $k$ connected pixels should I expect to find in an image of size $n\times n$? What I really need is a good way of calculating the probability of $k$ pixels being connected given the individual pixel probabilities. I have started to write down a tree to cover all the possibilities up to $k=3$, but even then, it becomes really ugly really fast. Is there a more clever way to go about this?
This looks a bit like percolation theory to me. In the 4-neighbour case, if you look at the dual of the image, the chance that an edge is connected (runs between two pixels of the same colour) is $p^2 + (1-p)^2 = 1-2p+2p^2$. I don't think you can get a nice closed-form answer for your question, but maybe a computer can help with some Monte Carlo simulation?
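A minimal Monte Carlo sketch along those lines (assuming the 8-connectivity and pixel independence stated in the question; function and parameter names are mine):

```python
import random

# Monte Carlo estimate of the expected number of 8-connected groups of at
# least k one-pixels in an n x n image, each pixel independently 1 with
# probability p.
def expected_groups(n, p, k, trials=200, seed=0):
    rng = random.Random(seed)
    neigh = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    total = 0
    for _ in range(trials):
        img = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        seen = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if img[i][j] and not seen[i][j]:
                    # flood-fill one connected component and measure its size
                    stack, size = [(i, j)], 0
                    seen[i][j] = True
                    while stack:
                        x, y = stack.pop()
                        size += 1
                        for dx, dy in neigh:
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < n and 0 <= ny < n
                                    and img[nx][ny] and not seen[nx][ny]):
                                seen[nx][ny] = True
                                stack.append((nx, ny))
                    if size >= k:
                        total += 1
    return total / trials
```

Dropping to 4-connectedness or moving to 3D only changes the `neigh` offsets, which is one reason simulation scales better here than the exact case analysis in the question.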
|probability|graph-theory|
1
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
The function which is $1$ on the interval $[0,1]$ and $0$ elsewhere is a non-continuous probability density function. Another example: the function which is $3$ on $[0,1]$ and $-1$ on $(1,3]$; and so on and on. What kind of answer do you want? What kind of properties do you want your functions to have? There really are too many functions to list, since multiplying any function by a $C^\infty$ function with compact support and then applying Kenny's trick gives you an answer.
|big-list|calculus|analysis|
0
Why are Hopf algebras called quantum groups?
Why are noncommutative nonassociative Hopf algebras called quantum groups? This seems to be a purely mathematical notion and there is no quantum anywhere in it prima facie.
I cannot comment, and this should be a comment... Observe that the question in your title and the question in the body of your question are quite different! A non-commutative non-cocommutative Hopf algebra is not the same thing as a non-commutative group, and quantum groups are usually associative.
|terminology|noncommutative-geometry|quantum-groups|
0
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
If you take any odd function $F$ differentiable on $\mathbb{R}$ and such that $F(x)\to l$ (with $l$ a nonzero real) as $x\to \infty$, then $f(x) = \frac{1}{2l}F'(x)$ satisfies your request. For example, $F(x) = \arctan(x)$.
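A numerical sanity check of that example: with $F = \arctan$ we have $l = \pi/2$, so $f(x) = \frac{1}{\pi}\cdot\frac{1}{1+x^2}$, the standard Cauchy density. A sketch in Python:

```python
import math

# f(x) = (1/pi) / (1 + x^2), i.e. (1/(2l)) F'(x) with F = arctan, l = pi/2.
# Its integral over [-T, T] is (2/pi) arctan(T), which tends to 1 as T grows.
def f(x):
    return (1.0 / math.pi) / (1.0 + x * x)

def trapezoid(func, a, b, n):
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b)) + sum(func(a + i * h) for i in range(1, n))
    return s * h
```

The numerical integral over $[-1000, 1000]$ matches the exact value $\frac{2}{\pi}\arctan(1000)$, which is already within $10^{-3}$ of $1$.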
|big-list|calculus|analysis|
0
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
Most practical mother wavelets have unit square norm: $\int_{-\infty}^\infty |\psi(t)|^2\, dt = 1$.
|big-list|calculus|analysis|
0
Looking for functions $f$ with $\int_{-\infty}^{\infty}f(x)\,dx = 1$.
I am looking for functions and/or constants that when being integrated from minus infinity to infinity produce 1. I think the Dirac delta function is one example but perhaps there are some more? References on useful material is also greatly appreciated.
Any function $f(x)$ which integrates to $1$ over some range $[a,b]$ fits this bill, since we can define $g(x)=f(x)$ on $[a,b]$, and $0$ everywhere else. Even if you only want continuous functions, restricting ourselves above to $f(x)$ with $f(a)=f(b)=0$ still satisfies this. If you want continuous functions strictly $>0$ everywhere, these are known as probability density functions (continuous on $(-\infty,\infty)$). A large list of such functions can be found here. A few more notable examples are: The normal distribution The skew-normal distribution The $t$-distribution The Cauchy distribution The extreme-value distribution
|big-list|calculus|analysis|
0
Computation with a memory wiped computer
Here is another result from Scott Aaronson's blog : If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly not true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation. Does anyone have idea of how to show this?
As Scott himself states in comments section of the post in question (comment #9): (4) Width-5 branching programs can compute NC1 (Barrington 1986); corollary pointed out by Ogihara 1994 that width-5 bottleneck Turing machines can compute PSPACE Unfortunately, I don't have any ideas how this is proved.
|computer-science|
0
Learning Lambda Calculus
What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus? Specifically, I am interested in the following areas: Untyped lambda calculus Simply-typed lambda calculus Other typed lambda calculi Church's Theory of Types (I'm not sure where this fits in). (As I understand, this should provide a solid basis for the understanding of type theory.) Any advice and suggestions would be appreciated.
It might be nice to work through Structure and Interpretation of Computer Programs , which is available online for free. This book is an introduction to computer science and the programming language Scheme, which is a flavor of the programming language Lisp, which is based on the lambda calculus. Although it is not strictly a book about the lambda calculus, it might be fun or useful to gain some hands-on and "practical" experience with the lambda calculus by reading some of this book and working through some of its exercises.
|logic|learning|online-resources|lambda-calculus|type-theory|
0
What is the single most influential book every mathematician should read?
If you could go back in time and tell yourself to read a specific book at the beginning of your career as a mathematician, which book would it be?
I would read a book about Perelman's proof of the Poincaré conjecture (or even the papers themselves). Oh, you mean the book had to be written when I was starting?
|soft-question|big-list|reference-request|
0
What is the single most influential book every mathematician should read?
If you could go back in time and tell yourself to read a specific book at the beginning of your career as a mathematician, which book would it be?
I am not a mathematician but Flatland: A Romance of Many Dimensions blew my mind. I read it when I was a college student in a class on Special Relativity and wish I had read it way earlier.
|soft-question|big-list|reference-request|
0
Meaning of closed points of a scheme
This is a question in Liu's book. Let $X$ be a quasi-compact scheme. Show that $X$ contains a closed point. Well I'm unable to do this question, so any help would be appreciated. This question also makes me curious to know about the meaning/use of closed points of a scheme in general - by that I mean a scheme which is not an algebraic variety/local scheme over a field, which has a geometric meaning. Thanks!
Closed points should be thought of as being "actual points", whereas non-closed points can correspond to all sorts of different things: subvarieties, "fat" or "fuzzy" points, generic points, etc. You might be interested in reading this blog post about Mumford's drawing of $\operatorname{Spec} \mathbb{Z}[x]$ . One possible way to justify the claim that closed points are the "actual points" is the fact that if we have, for instance, a smooth variety over $\mathbb{C}$ , then its analytification will be a complex manifold. The closed points of the former will then correspond exactly to the points of the latter.
|intuition|algebraic-geometry|
0
Visualising functions from complex numbers to complex numbers
I think that complex analysis is hard because graphs of even basic functions are 4 dimensional. Does anyone have any good visual representations of basic complex functions or know of any tools for generating them?
For Moebius transformations, check out this nice YouTube video .
|big-list|math-software|complex-analysis|
0
How to prove and interpret $\operatorname{rank}(AB) \leq \operatorname{min}(\operatorname{rank}(A), \operatorname{rank}(B))$?
Let $A$ and $B$ be two matrices which can be multiplied. Then $$\operatorname{rank}(AB) \leq \operatorname{min}(\operatorname{rank}(A), \operatorname{rank}(B)).$$ I proved $\operatorname{rank}(AB) \leq \operatorname{rank}(B)$ by interpreting $AB$ as a composition of linear maps, observing that $\operatorname{ker}(B) \subseteq \operatorname{ker}(AB)$ and using the kernel-image dimension formula. This also provides, in my opinion, a nice interpretation: if non-stable, under subsequent compositions the kernel can only get bigger, and the image can only get smaller, in a sort of loss of information . How do you manage $\operatorname{rank}(AB) \leq \operatorname{rank}(A)$ ? Is there a nice interpretation like the previous one?
Prove first that if $f:X\to Y$ and $g:Y\to Z$ are functions between finite sets, then $|g(f(X))| \leq \min \{ |f(X)|, |g(Y)| \}.$ Then use the same idea.
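Here is an empirical illustration (not a proof) of the rank inequality, using exact rational Gaussian elimination on random small integer matrices; function names are mine:

```python
import random
from fractions import Fraction

# Exact rank via Gaussian elimination over the rationals (Fraction avoids
# the floating-point pitfalls of numerical rank computation).
def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]
```

Running `rank(matmul(A, B))` against `min(rank(A), rank(B))` over many random pairs never violates the inequality, in line with the finite-set argument above.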
|linear-algebra|matrices|inequality|matrix-rank|
0
Meaning of closed points of a scheme
This is a question in Liu's book. Let $X$ be a quasi-compact scheme. Show that $X$ contains a closed point. Well I'm unable to do this question, so any help would be appreciated. This question also makes me curious to know about the meaning/use of closed points of a scheme in general - by that I mean a scheme which is not an algebraic variety/local scheme over a field, which has a geometric meaning. Thanks!
Kevin Lin's answer regarding the meaning of closed points is quite reasonable, especially in the case when the scheme in question underlies a classical variety. I want to add some additional remarks and examples for thinking about more general schemes. Here are some tautological remarks: recall that a point $x$ in a scheme $X$ is called a specialization of $y$ if $x$ lies in the Zariski closure of $y$ (and $y$ is called a generalization of $x$ ). So tautologically, a closed point is one that cannot be specialized any further (just as a generic point cannot be generalized any further). What does specialization really mean: ring theoretically, it means taking the image under a homomorphism; so if $\mathfrak p$ and $\mathfrak q$ are prime ideals of a ring $A$ , then $\mathfrak q$ is a specialization of $\mathfrak p$ in $\text{Spec} A$ if and only if $\mathfrak q$ contains $\mathfrak p$ , i.e. if $A/\mathfrak p$ surjects onto $A/\mathfrak q$ . It is perhaps best to think of an example: say
|intuition|algebraic-geometry|
1
What's the difference between open and closed sets?
What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
Intuitively speaking, an open set is a set without a border: every element of the set has, in its neighborhood, other elements of the set. If, starting from a point of the open set, you move away a little, you never exit the set. A closed set is the complement of an open set (i.e. what stays "outside" the open set). Note that there exist sets that are neither open nor closed.
|general-topology|terminology|intuition|
0
What's the difference between open and closed sets?
What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
A set X is open if for every point p in X, there exists a neighborhood (open ball) N of p such that N is a subset of X. We call the point p of the set X a limit point if every neighborhood of p has another point q which is also in X. The set X is closed if every limit point of X is a point of X. A set can be both open and closed, and such sets are occasionally termed "clopen." Trivial examples of clopen sets are the empty set (since it has no points, both the above definitions are vacuously true) and the set of all real numbers. You can in turn visualize an open set in R as an open interval on the real line, and a closed set as a closed interval on the real line.
|general-topology|terminology|intuition|
0
Visualising functions from complex numbers to complex numbers
I think that complex analysis is hard because graphs of even basic functions are 4 dimensional. Does anyone have any good visual representations of basic complex functions or know of any tools for generating them?
The graphs in the middle of the MathWorld page on Conformal Mapping show examples of the first method in Michael Lugo's answer as well as something somewhat similar to the second method in that answer.
|big-list|math-software|complex-analysis|
0
A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language
The following is a quote from Surely you're joking, Mr. Feynman . The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.) Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false." It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?" "No holes." "Impossible! "Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!" Just when they think they've got me, I remind them, "But you said an orange! You can't
What is the smallest area of a parking lot in which a car (that is, a segment of) can perform a complete turn (that is, rotate 360 degrees)? (This is obviously the Kakeya Needle Problem. Fairly easy to explain, models an almost reasonable real-life scenario, and has a very surprising answer as you probably know - the lot can have as small an area as you'd like). Wikipedia entry: Kakeya Set .
|soft-question|big-list|examples-counterexamples|
0
What's the difference between open and closed sets?
What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
An open set is a set S for which, given any of its elements A, you can find a ball centered at A whose points are all in S. A closed set is a set S for which, if you have a sequence of points in S that tends to a limit point B, then B is also in S. Intuitively, a closed set is a set which contains its own boundary, while an open set is one you cannot leave by moving just a little bit.
|general-topology|terminology|intuition|
0
List of interesting math podcasts?
mathfactor is one I listen to. Does anyone else have a recommendation?
Peter Rowlett has a couple mathematical podcasts. Travels in a Mathematical World produced 64 episodes, but recently stopped. He has a new podcast Math / Maths that he co-hosts with Samuel Hansen. Samuel Hansen also has a couple other podcasts: Strongly Connected Components and Permutations and Combinations . Strongly Connected Components is more mathematical and often features interviews. I've only listened to Permutations and Combinations once or twice. I believe it's more of a comedy show.
|soft-question|big-list|online-resources|
0
Choosing a text for a First Course in Topology
Which is a better textbook - Dugundji or Munkres? I'm concerned with clarity of exposition and explanation of motivation, etc.
Try Simmons, Introduction to Topology and Modern Analysis.
|general-topology|reference-request|soft-question|book-recommendation|
0
Good books and lecture notes about category theory.
What are the best books and lecture notes on category theory?
First Chapter of Jacobson's Basic Algebra -II.
|reference-request|soft-question|category-theory|big-list|book-recommendation|
0
Fourier transform for dummies
What is the Fourier transform? What does it do? Why is it useful (in math, in engineering, physics, etc)? This question is based on the question of Kevin Lin , which didn't quite fit in Mathoverflow. Answers at any level of sophistication are welcome.
I'll give an engineering answer. If you have a time series that you think is the result of an additive collection of periodic functions, the Fourier transform will help you determine what the dominant frequencies are. This is the way guitar tuners work. They perform an FFT on the sound data and pick out the frequency with the greatest power (squares of the real and imaginary parts) and consider that the "note." This is called the fundamental frequency. There are many other uses, so you might want to add big list as a tag.
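To make that concrete, here's a minimal sketch of the tuner idea described above (numpy is assumed to be available; all names here are mine, not from the answer): synthesize a signal dominated by a 440 Hz component, FFT it, and pick the bin with the greatest power.

```python
import numpy as np

rate = 8000                      # samples per second
t = np.arange(rate) / rate       # one second of samples
# A 440 Hz "string" plus a weaker overtone.
signal = np.sin(2*np.pi*440*t) + 0.3*np.sin(2*np.pi*880*t)

spectrum = np.fft.rfft(signal)
power = spectrum.real**2 + spectrum.imag**2   # squares of the real and imaginary parts
freqs = np.fft.rfftfreq(len(signal), d=1/rate)
print(freqs[np.argmax(power)])   # → 440.0
```

With exactly one second of samples the FFT bins are 1 Hz apart, so the peak lands exactly on 440.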
|fourier-analysis|fourier-transform|
0
Fourier transform for dummies
What is the Fourier transform? What does it do? Why is it useful (in math, in engineering, physics, etc)? This question is based on the question of Kevin Lin , which didn't quite fit in Mathoverflow. Answers at any level of sophistication are welcome.
You could think of a Fourier series expanding a function as a sum of sines and cosines analogous to the way a Taylor series expands a function as a sum of powers. Or you could think of the Fourier series as a change of variables. A fundamental skill in engineering and physics is to pick the coordinate system that makes your problem simplest. Since the derivatives of sines and cosines are more sines and cosines, Fourier series are the right "coordinate system" for many problems involving derivatives.
|fourier-analysis|fourier-transform|
0
Resources for getting maths on to the web
An off-topic question posed at Mathoverflow by Andrew Stacey, but one which fits here: One thing that came out of Terry Tao's recent blog posts on this matter ( first post and follow up ) is that it's hard to get an overview of all the different ways of getting one's amazing mathematics onto the web. I thought it'd be useful to gather together a list of such. This meant to be a list of ways to do it, not examples of where it's already being done. Standard community wiki rules: one thing per answer and feel free to edit other's answers. Additional rules: it'd be useful to have a little more than just links. A brief description, pros and cons (be objective), platforms (does it only work on Linux, sort of thing) - things that might help someone decide which things to examine further.
http://mathurl.com is handy if you want to send someone a quick link to a mathematical expression.
|soft-question|
0
Examples of well-displayed mathematics on the internet
An off-topic question asked at Mathoverflow by Andrew Stacey; but one which fits here: I'm interested in hearing of examples of mathematical (or, at a pinch, scientific) websites with serious content where the design of the website actually makes it easy to read and absorb the material. To be absolutely clear, the mathematical content of the website should be on the website itself and not in an electronic article (so meta-sites that make it easy to find material, like MathSciNet or the arXiv, don't count). Edit: I'm extending this to non-internet material. I want examples where the design of the document/website/whatever actually helped when reading the material. As a little background, I know that LaTeX is meant to help us separate content from context and concentrate on each one in turn, but I often feel when reading an article that the author has concentrated solely on the content and left all of the context to TeX. This is most obvious with websites where there are some really well
For students in lower level university courses, there are two amazing resources. Paul's online math notes . My linear algebra book was complete garbage. I pretty much used these notes to get through the class. It explains everything very well and does not assume that you already know graduate level mathematics. Presentation-wise, it's very simple: a plain HTML website, but content is king. And of course, Khan Academy. I think everyone knows about Khan. Last but not least, I would like to add the MIT online courses. I didn't like them that much but they did help me. Maybe it's just me, but I cannot learn anything while staring at a computer screen.
|soft-question|online-resources|
0
Usefulness of Conic Sections
Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
Conic sections are basic examples in algebraic geometry, since they are (the real forms of) curves of genus zero. As such, they are also basic examples in number theory, since it is easy to determine the rational points on a conic section, and this is a good warm-up for studying more complicated Diophantine equations. In fact, curves of genus zero are the only class of variety for which an algorithm provably exists to determine the rational points! Even for the next hardest case, elliptic curves, there are no algorithms which provably always work. A great survey of these topics is Bjorn Poonen's Computing rational points on curves . Edit: There is also Franz Lemmermeyer's Conics - a poor man's elliptic curves , which explains how certain conics can be thought of as "degenerate" elliptic curves.
|algebra-precalculus|conic-sections|education|
1
Number of colorings of cube's faces
How many ways are there to color faces of a cube with N colors if two colorings are the same if it's possible to rotate the cube such that one coloring goes to another?
The number of different colorings is equal to \begin{equation*} \frac{n^6 + 3n^4 + 12n^3 + 8n^2}{24}. \end{equation*} You can get this number using Burnside lemma . The wikipedia article contains solution of your problem as well.
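As a quick sanity check, the formula is easy to evaluate; the per-rotation cycle counts in the comments are the standard Burnside bookkeeping (this is an illustrative sketch, not part of the original answer):

```python
def cube_face_colorings(n):
    # Burnside's lemma: average, over the 24 rotations of the cube,
    # the number of face colorings each rotation fixes.
    return (n**6        # identity: all 6 faces free
            + 3*n**4    # 3 rotations by 180° about face axes: 4 face-cycles
            + 12*n**3   # 6 rotations by ±90° about face axes and
                        # 6 rotations by 180° about edge axes: 3 cycles each
            + 8*n**2    # 8 rotations by ±120° about vertex axes: 2 cycles
            ) // 24

print(cube_face_colorings(2))  # → 10
print(cube_face_colorings(3))  # → 57
```

The values 10 and 57 match the classical counts for two and three colors.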
|combinatorics|
1
Useful examples of pathological functions
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
Three examples suffice to show why some modes of convergence don't imply other modes of convergence: pointwise convergence, Lp norm convergence, convergence in measure, etc. See the counterexamples section here .
|soft-question|big-list|calculus|real-analysis|
0
Useful examples of pathological functions
What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$.
When I was first learning calculus, the fact that $\sin(1/x)$ is continuous on the set $(0,\infty)$ gave me a headache.
|soft-question|big-list|calculus|real-analysis|
0
What is a Markov Chain?
What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example.
Markov chains are used in Markov Chain Monte Carlo (MCMC). This computational technique is extremely common in Bayesian statistics. In Bayesian statistics, you want to compute properties of a posterior distribution. You'd like to draw independent samples from this distribution, but often this is impractical. So you construct a Markov chain that has as its limiting distribution the distribution you want. So, for example, to get the mean of your posterior distribution you could take the mean of the states of your Markov chain. (Ergodic theory blesses this process.)
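Here is a toy sketch of that idea in plain Python: a Metropolis chain whose limiting distribution is a standard normal (standing in for the posterior), with the mean estimated by averaging the chain's states. All names here are my own, not from the answer.

```python
import math
import random

def metropolis_chain(n_steps, seed=0):
    """Toy Metropolis sampler targeting a standard normal distribution,
    using only its unnormalized log-density (as with real posteriors)."""
    rng = random.Random(seed)
    log_p = lambda t: -0.5 * t * t      # standard normal, up to a constant
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-1.0, 1.0)
        # Accept with probability min(1, p(proposal)/p(x)).
        if rng.random() < math.exp(min(0.0, log_p(proposal) - log_p(x))):
            x = proposal
        samples.append(x)
    return samples

chain = metropolis_chain(50_000)
print(sum(chain) / len(chain))   # sample mean: close to the true mean, 0
```

Each state depends only on the previous one, which is exactly the Markov property the answer relies on.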
|probability-theory|stochastic-processes|terminology|markov-chains|intuition|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
Here's an example of Fibonacci numbers applied to numerical integration .
|combinatorics|big-list|applications|fibonacci-numbers|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
Here's a humorous application of Fibonacci numbers to breastfeeding twins . Even though the post is somewhat of a joke, it makes a serious point about what are called "almost periodic functions."
|combinatorics|big-list|applications|fibonacci-numbers|
0
Mathematical subjects you wish you learned earlier
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier. Additionally I feel the same way about enumerative combinatorics. What are some less popular mathematical subjects that you think should be more popular?
I wish I'd understood the importance of inequalities earlier. I wish I'd carefully gone through the classic book Inequalities by Hardy, Littlewood, and Pólya early on. Another good book is The Cauchy-Schwarz Masterclass . You can study inequalities as a subject in their own right, often without using advanced math. But they're critical techniques for advanced math.
|soft-question|learning|
0
Why are derivatives specified as d/dx?
Is the purpose of the derivative notation d/dx strictly for symbolic manipulation purposes? I remember being confused when I first saw the notation for derivatives - it looks vaguely like there's some division going on and there are some fancy 'd' characters that are added in... I recall thinking that it was a lot of characters to represent an action with respect to one variable. Of course, once you start moving the dx around it makes a little more sense as to why they exist - but is this the only reason? Any history lesson or examples where this notation is helpful or unhelpful is appreciated.
If you have access to it, the book A History of Mathematical Notations , by Florian Cajori, has a pretty detailed description of the history of notations for derivatives in its second volume.
|calculus|reference-request|notation|math-history|
0
Fourier transform for dummies
What is the Fourier transform? What does it do? Why is it useful (in math, in engineering, physics, etc)? This question is based on the question of Kevin Lin , which didn't quite fit in Mathoverflow. Answers at any level of sophistication are welcome.
A more complicated answer (yet it's going to be imprecise, because I haven't touched this in 15 years...) is the following. In a 3-dimensional space (for example) you can represent a vector v by its end point coordinates, x, y, z, in a very simple way. You choose three vectors which are of unit length and orthogonal to each other (a base), say i , j and k , and calculate the coordinates as such: x = v ∙ i y = v ∙ j z = v ∙ k In a multidimensional space, the equations still hold. In a discrete infinite space, the coordinates and the base vectors become a sequence. The dot product becomes an infinite sum. In a continuous infinite space (like the space of good functions) the coordinates and the bases become functions and the dot product an infinite integral. Now, the Fourier transform is exactly this kind of operation (based on a set of base functions which are basically a set of sines and cosines). In other words, it is a different representation of the same function in relation to a par
|fourier-analysis|fourier-transform|
0
Best Algebraic Geometry text book? (other than Hartshorne)
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
Algebraic Geometry: A First Course by Joe Harris is a very good book that sits in that region between undergraduate treatments and the prerequisites of Hartshorne. In particular, one does not need to know much commutative algebra to get a lot out of Harris's book. Harris himself recommends reading Hartshorne after his book for the theory of schemes.
|big-list|reference-request|algebraic-geometry|
0
What's the difference between open and closed sets?
What's the difference between open and closed sets? Especially with relation to topology - rigorous definitions are appreciated, but just as important is the intuition!
I will not reiterate the very nice definitions found in the other answers, however I think that these "practical" definitions might help you as well on an intuitive level. Open sets are typically used as domains for functions, as they are more useful for analysing "continuous" properties like differentiability. Also they don't have nasty borders (hence you don't have to deal with functions which are well behaved only on one side of the edge). Closed sets are useful because, if they are bounded, they are compact (in $\mathbb R^n$).
|general-topology|terminology|intuition|
0
Usefulness of Conic Sections
Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
The study of parabolas (with axis parallel to the y-axis) is useful when you have to solve quadratic inequalities.
|algebra-precalculus|conic-sections|education|
0
Why are derivatives specified as d/dx?
Is the purpose of the derivative notation d/dx strictly for symbolic manipulation purposes? I remember being confused when I first saw the notation for derivatives - it looks vaguely like there's some division going on and there are some fancy 'd' characters that are added in... I recall thinking that it was a lot of characters to represent an action with respect to one variable. Of course, once you start moving the dx around it makes a little more sense as to why they exist - but is this the only reason? Any history lesson or examples where this notation is helpful or unhelpful is appreciated.
This is the Leibniz notation, which is based on the ratio of "infinitesimals". $dy$ and $dx$ are, respectively, the infinitesimal increment of the dependent variable $y$ and the infinitesimal increment of the variable $x$. There are other notations: Newton notation, which puts a dot over the variable name, as in $\dot y$, and Cauchy notation, which uses the operator $D$, as in $D(\sin(x))=\cos(x)$.
|calculus|reference-request|notation|math-history|
0
Why are derivatives specified as d/dx?
Is the purpose of the derivative notation d/dx strictly for symbolic manipulation purposes? I remember being confused when I first saw the notation for derivatives - it looks vaguely like there's some division going on and there are some fancy 'd' characters that are added in... I recall thinking that it was a lot of characters to represent an action with respect to one variable. Of course, once you start moving the dx around it makes a little more sense as to why they exist - but is this the only reason? Any history lesson or examples where this notation is helpful or unhelpful is appreciated.
Because of their definition: Start with a function, calculate the difference in value between two points and divide by the size of the interval between the two. You can represent this as such: $$\frac{f\left(x_2\right)-f\left(x_1\right)}{x_2-x_1}$$ or $$\frac{\Delta f\left(x\right)}{\Delta x}$$ where $\Delta$, delta, is the Greek capital D and indicates an interval. Now, take the limit as $\Delta x$ goes to zero, and you have the derivative. This is indicated by using a lower case $d$ instead of the $\Delta$. $$\frac{df\left(x\right)}{dx}$$ Now, if this operation is treated as an operator applied to a function, it is usually represented as $$\frac{d}{dx}f\left(x\right)$$ Note that (typically in physics), you can also use the letter $\delta$ to indicate very small intervals, and in general you would use the symbol $\partial$ to represent partial derivatives. They are all variations of the letter $D$.
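The limiting process in that answer is easy to watch numerically: the difference quotient approaches the derivative as $\Delta x$ shrinks (a small illustrative snippet; the names are mine):

```python
import math

def difference_quotient(f, x, dx):
    """The ratio (f(x + dx) - f(x)) / dx from the answer above."""
    return (f(x + dx) - f(x)) / dx

# As dx shrinks, the quotient approaches d/dx sin(x) = cos(x) at x = 1.
for dx in (1e-1, 1e-3, 1e-6):
    print(difference_quotient(math.sin, 1.0, dx))
print(math.cos(1.0))   # the limiting value, ≈ 0.5403
```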
|calculus|reference-request|notation|math-history|
1
Increasing network throughput by cutting routes
Suppose we model traffic flow between two points with a directed graph. Each route has either a constant travel time or one that linearly increases with traffic. We assume that each driver wishes to minimise their own travel time and we assume that the drivers form a Nash equilibria. Can removing a route ever decrease the average travelling time? Note that the existence of multiple Nash equilibria makes this question a bit complicated. To clarify, I am looking for a route removal that will guarantee a decrease in the average traveling time regardless of the Nash equilibria that are chosen before and after.
The form this question is usually asked is whether adding a route can increase the average traveling time, and this is known as Braess's paradox . The Wiki article gives an explicit example in which the travel time on some of the routes depends on the traffic.
|graph-theory|scoring-algorithm|
1
Usefulness of Conic Sections
Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
If you're a physics sort of person, conic sections clearly come up when you study how Kepler figured out what the shapes of orbits are, and some of their synthetic properties give useful shortcuts to things like proving "equal area swept out in equal time" that need not involve calculus. The other skills you typically learn while studying conic sections in analytic geometry - polar parametrization of curves, basic facts about various invariants related to triangles and conics, rotations and changing coordinate systems (so as to recognize the equation of a conic in general form as some sort of transformation of a standard one) - are all extremely useful in physics. I'd say that plane analytic geometry was the single most useful math tool for me in solving physics problems until I got to fluid dynamics stuff (where that is replaced by complex analysis). Relatedly, independent of their use in physics, I think they're a great way to show the connections between analytic and synthetic think
|algebra-precalculus|conic-sections|education|
0
Different definitions of trigonometric functions
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent?
Most of the proofs in elementary calculus textbooks use the definition of $\sin x$ via geometry to prove that the derivative of $\sin x$ is $\cos x$ (namely, the fact that $\lim_{x \to 0} \frac{ \sin x}{x} = 1$). Consequently, it follows that $\sin x$ and $\cos x$ are the two linearly independent solutions of $y'' = -y$. The power series are also two linearly independent solutions of this differential equation. Moreover, $\sin x$ and its derivative agree at zero with the power series for $\sin x$ and its derivative (no surprise; it's a Taylor series). Same for $\cos x$. By uniqueness of solutions to ordinary differential equations, this proves that $\sin x$ and $\cos x$ as defined in school are equal to their power series. (This is an expansion of Qiaochu's comment.)
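One can also check numerically that the truncated power series reproduces the school definition of $\sin x$ (a small illustration; the function name is mine):

```python
import math

def sin_series(x, terms=20):
    """Partial sum of the power series sin x = Σ (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for x in (0.5, 1.0, 3.0):
    print(sin_series(x), math.sin(x))   # the two columns agree
```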
|geometry|calculus|trigonometry|
1
Different definitions of trigonometric functions
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent?
As a rough outline, the circular definitions of sine and cosine (the y- and x-coordinates of the image of (1,0) under a rotation about the origin) lead to being able to differentiate sine and cosine, and once you know how to differentiate them (infinitely), Taylor's Theorem justifies that the power series is equal to the function.
|geometry|calculus|trigonometry|
0
Fourier transform for dummies
What is the Fourier transform? What does it do? Why is it useful (in math, in engineering, physics, etc)? This question is based on the question of Kevin Lin , which didn't quite fit in Mathoverflow. Answers at any level of sophistication are welcome.
Let me partially steal from the accepted answer on MO, and illustrate it with examples I understand: The Fourier transform is a different representation that makes convolutions easy. Or, to quote directly from there: "the Fourier transform is a unitary change of basis for functions (or distributions) that diagonalizes all convolution operators." This often involves expressing an arbitrary function as a superposition of "symmetric" functions of some sort, say functions of the form $e^{itx}$ — in the common signal-processing applications, an arbitrary "signal" is decomposed as a superposition of "waves" (or "frequencies"). Example 1: Polynomial multiplication This is the use of the discrete Fourier transform I'm most familiar with. Suppose you want to multiply two polynomials of degree $n$, given by their coefficients $(a_0, \dots, a_n)$ and $(b_0, \dots, b_n)$. In their product, the coefficient of $x^k$ is $c_k = \sum_i a_i b_{k-i}$. This is a convolution, and doing it naively would take $O(n^2)$ time. Instead,
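The polynomial-multiplication example can be sketched in code. Assuming numpy is available, a convolution via the FFT looks like this (the function name is mine):

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two polynomials, given as coefficient lists, via the FFT."""
    n = len(a) + len(b) - 1           # number of coefficients in the product
    size = 1
    while size < n:                   # FFT sizes: next power of two
        size *= 2
    fa = np.fft.rfft(a, size)         # evaluate both polynomials at roots of unity
    fb = np.fft.rfft(b, size)
    c = np.fft.irfft(fa * fb, size)   # pointwise product, then interpolate back
    return np.rint(c[:n]).astype(int)

print(poly_multiply([1, 2], [3, 4]))  # coefficients of (1+2x)(3+4x) = 3 + 10x + 8x^2
```

The pointwise product in the frequency domain is exactly the "diagonalized convolution" the quote refers to.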
|fourier-analysis|fourier-transform|
0
How do I convert from Cartesian to conical coordinates?
I have some polygons I would like to map onto the face of a cone. I can see from this page that I can convert the points of the polygon to cylindrical coordinates, which is almost what I want. How do I go about modifying the formulas to work for conical coordinates?
Presumably you already have the formulas for converting from conical to rectangular coordinates as listed on the Wikipedia page for conical coordinates . You'll need to solve for r, μ, and ν in terms of x, y, and z to get your answer. I can't see offhand the easiest way to find a general formula, but if you're trying to find it for particular values of r, μ, and ν, it shouldn't be too hard.
|geometry|coordinate-systems|
1
Different definitions of trigonometric functions
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent?
Call the highschool functions (defined by the right triangle inscribed in a unit circle , the angle being equal to the length of the arc of the circle ) $\sin_h$ and $\cos_h$, and let $\sin_p$ and $\cos_p$ be the power series definitions. (Note that these functions are continuous and agree at the end points $0$ and $2\pi$). Since $\sin_p^2(\theta)+\cos_p^2(\theta)=1$ the power series definitions also form a right triangle. Hence $\sin_h = \sin_p \circ \gamma$ and $\cos_h = \cos_p \circ \gamma$ for some parameterization $\gamma$. We know the power series definitions satisfy the arc length criteria so $\gamma$ must be the identity function.
|geometry|calculus|trigonometry|
0
Learning Lambda Calculus
What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus? Specifically, I am interested in the following areas: Untyped lambda calculus Simply-typed lambda calculus Other typed lambda calculi Church's Theory of Types (I'm not sure where this fits in). (As I understand, this should provide a solid basis for the understanding of type theory.) Any advice and suggestions would be appreciated.
Recommendations: Barendregt & Barendsen, 1998, Introduction to lambda-calculus ; Girard, Lafont & Taylor, 1987, Proofs and Types ; Sørenson & Urzyczyn, 1999, Lectures on the Curry-Howard Isomorphism . All of these are mentioned in the LtU Getting Started thread .
|logic|learning|online-resources|lambda-calculus|type-theory|
0
Best Maths Books for Non-Mathematicians
I'm not a real Mathematician, just an enthusiast. I'm often in the situation where I want to learn some interesting Maths through a good book, but not through an actual Maths textbook. I'm also often trying to give people good Maths books to get them "hooked". So the question: What are good books, for laymen, which teach interesting Mathematics, but actually does it in a "real" way. For example, "Fermat's Last Enigma" doesn't count, since it doesn't actually feature any Maths, just a story, and most textbook don't count, since they don't feature a story. My favorite example of this is " Journey Through Genius ", which is a brilliant combination of interesting storytelling and large amounts of actual Mathematics. It took my love of Maths to a whole other level. Edit: A few more details on what I'm looking for. The audience of "laymen" should be anyone who has the ability (and desire) to understand actual mathematics, but does not want to learn from a textbook. Obviously I'm thinking abo
As a computer scientist with an interest in mathematics I liked The Princeton Companion to Mathematics , though it is a heavy book and not always light reading.
|reference-request|soft-question|big-list|book-recommendation|
0
Usefulness of Conic Sections
Conic sections are a frequent target for dropping when attempting to make room for other topics in advanced algebra and precalculus courses. A common argument in favor of dropping them is that typical first-year calculus doesn't use conic sections at all. Do conic sections come up in typical intro-level undergraduate courses? In typical prelim grad-level courses? If so, where?
Conic sections should definitely be retained. If you don't cover conic sections, then what other examples can you cover? Lines? Too simple. General curves? Insufficiently concrete. Examples are very important for illustrating the general theory and techniques. Also, in a multivariable calculus course, typical examples will involve quadric surfaces. Here conic sections will come into play, since hyperplane sections (or "level curves") of quadric surfaces are conic sections.
|algebra-precalculus|conic-sections|education|
0
Perfect set without rationals
Give an example of a perfect set in $\mathbb R^n$ that does not contain any of the rationals. (Or prove that it does not exist).
An easy example comes from the fact that a number with an infinite continued fraction expansion is irrational (and conversely). The set of all irrationals with continued fractions consisting only of 1's and 2's in any arrangement is a perfect set of irrational numbers.
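For illustration, one can evaluate finite truncations of such continued fractions exactly with Python's fractions module (a sketch; the all-1's and all-2's cases converge to the golden ratio and $1+\sqrt 2$ respectively):

```python
from fractions import Fraction

def cf_value(quotients):
    """Evaluate a finite continued fraction [a0; a1, a2, ...] exactly."""
    value = Fraction(quotients[-1])
    for a in reversed(quotients[:-1]):
        value = a + 1 / value
    return value

# All-1's converges to the golden ratio; all-2's to 1 + sqrt(2).
print(float(cf_value([1] * 30)))   # ≈ 1.6180339887
print(float(cf_value([2] * 30)))   # ≈ 2.4142135624
```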
|real-analysis|general-topology|examples-counterexamples|
0
Usage of dx in Integrals
All the integrals I'm familiar with have the form: $\int f(x)\mathrm{d}x$. And I understand these as the sum of infinite tiny rectangles with an area of: $f(x_i)\cdot\mathrm{d}x$. Is it valid to have integrals that do not have a differential, such as $\mathrm{d}x$, or that have the differential elsewhere than as a factor ? Let me give couple of examples on what I'm thinking of: $\int 1$ If this is valid notation, I'd expect it to sum infinite ones together, thus to go inifinity. $\int e^{\mathrm{d}x}$ Again, I'd expect this to go to infinity as $e^0 = 1$, assuming the notation is valid. $\int (e^{\mathrm{d}x} - 1)$ This I could potentially imagine to have a finite value. Are any such integrals valid? If so, are there any interesting / enlightening examples of such integrals?
When you write $\int f(x) dx$, the whole of $\int ... dx$ is an indivisible symbol, just as the $d/dx$ is an indivisible symbol when you write $df/dx$. Of course, there are reasons why the notation is as it is, but trying to manipulate it like you suggest in $\int e^{dx}$, for example, is simply meaningless.
|calculus|notation|
0
Packing boxes and proof of Riemann Hypothesis
From Scott Aaronson's blog : There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down. He later commented to explain where he got this from: "3-dimensional bin-packing is NP-complete." I don't see how these two are related. Another question inspired by the same article is here .
The question of whether a formal proof of the Riemann Hypothesis exists (with at most a million symbols) is a problem in NP: given such a proof, it can be verified to be correct in polynomial time. Bin-packing is NP-complete: this means that every problem in NP can be reduced to bin packing. In particular, the problem mentioned in the previous paragraph can. (This is a reduction that can be made explicit, so once we specify the proof verifier etc., we can carry out the steps of the reduction to get an instance of bin packing. We also need the reduction to be "parsimonious" i.e. solutions correspond one-to-one; I believe it is.)
|computer-science|proof-theory|packing-problem|riemann-hypothesis|
1
Intuitive explanation of covariant, contravariant and Lie derivatives
I would be glad if someone could explain in intuitive terms what these different derivatives are, and possibly give some practical, understandable examples of how they would produce different results. To be clear, I would like to understand the geometrical or physical meaning of these operators more than the mathematical or topological subtleties that lead to them! Thanks!
The Lie derivative is a derivative of a vector field V along another vector field W. It is defined at a point p as follows: flow the point p along W for some time t and look at the value of V at this point. Then push this forward along the flow of W to a vector at p. Subtract $V_p$ from this, divide by t, and take the limit as $t \to 0$. So this is a measure of how V changes as it gets pushed around by the flow of W. The covariant derivative is a derivative of a vector field V along a vector W. Unlike the Lie derivative, this does not come for free: we need a connection, which is a way of identifying tangent spaces. The reason we need this extra data is that if we wanted to take the directional derivative of V along the vector W as we do in Euclidean space, we would be taking something like $V_{p+tW} - V_p$, which is the difference of vectors living in different tangent spaces. If we have a metric, then we can impose reasonable conditions that give us a unique connection (the Levi-Civita connection).
|differential-geometry|intuition|lie-derivative|
1
Usage of dx in Integrals
All the integrals I'm familiar with have the form: $\int f(x)\mathrm{d}x$. And I understand these as the sum of infinite tiny rectangles with an area of: $f(x_i)\cdot\mathrm{d}x$. Is it valid to have integrals that do not have a differential, such as $\mathrm{d}x$, or that have the differential elsewhere than as a factor? Let me give a couple of examples of what I'm thinking of: $\int 1$ If this is valid notation, I'd expect it to sum infinite ones together, and thus go to infinity. $\int e^{\mathrm{d}x}$ Again, I'd expect this to go to infinity, as $e^0 = 1$, assuming the notation is valid. $\int (e^{\mathrm{d}x} - 1)$ This I could potentially imagine to have a finite value. Are any such integrals valid? If so, are there any interesting / enlightening examples of such integrals?
I think your question here shows that, while you have been using these symbols, you haven't really been given a proper motivation for where they came from. Let's go back and consider how we came up with the idea of an integral. In a typical class, you will see a lot of pictures like this: We find the area under the curve by summing up the areas of all these little rectangles. If we wanted to write an expression for the area, it would look like: $$ \sum_{i=1}^{n}f(x_i)\Delta x $$ The $\Sigma$ means that we are computing a sum. We are adding the areas of the rectangles, which we have numbered $1$ through $n$, to get the complete area under the curve. The area of each rectangle is given by multiplying the height by the width. The height is given by $f(x_i)$ because the base of the rectangle is at $0$, and the top of the rectangle is where it meets the function $f$. The $\Delta x$ represents the width of each rectangle. When we find the integral, we are taking the limit of this sum as the number of rectangles goes to infinity.
|calculus|notation|
0
Different definitions of trigonometric functions
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent?
There is another proof that the derivative of sine is cosine that doesn't use the sandwich theorem mentioned by Qiaochu and Akhil above. Instead, one can use the definition of arcsine and the standard calculus formula for arc length in terms of an integral to show that arcsine is the integral of $(1 - x^2)^{-1/2}$. It follows that the derivative of arcsine is $(1 - x^2)^{-1/2}$, and (by the chain rule) one can use this fact to prove that the derivative of sine is cosine. In fact, I'm not sure why this proof is presented less frequently than the one via the sandwich theorem. The unit circle definition of sine is based on arc length, and in calculus we learn a formula for arc length based on integration. Why not connect these two concepts for a natural proof that the derivative of sine is cosine?
|geometry|calculus|trigonometry|
0
Sum of two periodic functions
Let $f$ and $g$ be two periodic functions over $\Bbb{R}$ with the following property: If $T$ is a period of $f$, and $S$ is a period of $g$, then $T/S$ is irrational. Conjecture : $f+g$ is not periodic. Could you give a proof or a counter example? It is easier if we assume continuity. But is it true for arbitrary real valued functions?
Pick a basis $B$ of $\mathbb R$ as a $\mathbb Q$ vector space, and split it into two non-empty disjoint parts $B_1$ and $B_2$. Define $\mathbb Q$-linear maps $f,g:\mathbb R\to\mathbb R$ such that $f(x)=x$ and $g(x)=0$ if $x\in B_1$, $f(x)=0$ and $g(x)=x$ if $x\in B_2$. Then $f(x)+g(x)=x$ for all $x\in B$, so that in fact $f+g=\operatorname{id}_{\mathbb R}$, which is not a periodic function. Moreover $f$ and $g$ are periodic: the set of periods of $f$ is the $\mathbb Q$-span of $B_2$ (the kernel of $f$), and the set of periods of $g$ is the $\mathbb Q$-span of $B_1$. Since $B_1\cup B_2$ is linearly independent over $\mathbb Q$, these two spans intersect only in $0$, so $T/S\not\in\mathbb Q$ whenever $T\neq 0$ is a period of $f$ and $S\neq 0$ is a period of $g$. This is then an example where the sum is not periodic.
|analysis|
0
Sum of two periodic functions
Let $f$ and $g$ be two periodic functions over $\Bbb{R}$ with the following property: If $T$ is a period of $f$, and $S$ is a period of $g$, then $T/S$ is irrational. Conjecture : $f+g$ is not periodic. Could you give a proof or a counter example? It is easier if we assume continuity. But is it true for arbitrary real valued functions?
Here is a counterexample. Let $a, b, c \in \mathbb{R}$ be linearly independent over $\mathbb{Q}$. Let $\text{span}(x, y, z, ...)$ be the $\mathbb{Q}$-vector space in $\mathbb{R}$ spanned by $x, y, z, ...$. Let $AB = \text{span}(a, b), BC = \text{span}(b, c), AC = \text{span}(a, c)$. And for a subset $S$ of $\mathbb{R}$, let $\chi_S$ denote the characteristic function of $S$. Now define $\displaystyle f(x) = \chi_{AB} - 2 \chi_{BC}$ and $\displaystyle g(x) = 3 \chi_{AC} + 2 \chi_{BC}.$ Then $f$ has period set $\text{span}(b)$, $g$ has period set $\text{span}(c)$, and $f + g$ has period set $\text{span}(a)$. (I am not sure if the coefficients are necessary; they're just precautions.) Are you still interested in the continuous case? (Old answer below. I slightly misunderstood the question when I wrote this.) Here is a simpler example. I claim that the function $h(x) = \sin x + \sin \pi x$ cannot possibly be periodic. Why? Suppose an equation of the form $\sin x + \sin \pi x = \sin (x+T) +
|analysis|
1
What's an intuitive way to think about the determinant?
In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
The top exterior power of an $n$-dimensional vector space $V$ is one-dimensional. Its elements are sometimes called pseudoscalars, and they represent oriented $n$-dimensional volume elements. A linear operator $f$ on $V$ can be extended to a linear map on the exterior algebra according to the rules $f(\alpha) = \alpha$ for $\alpha$ a scalar and $f(A \wedge B) = f(A) \wedge f(B), f(A + B) = f(A) + f(B)$ for $A$ and $B$ blades of arbitrary grade. Trivia: some authors call this extension an outermorphism . The extended map will be grade-preserving; that is, if $A$ is a homogeneous element of the exterior algebra of grade $m$, then $f(A)$ will also have grade $m$. (This can be verified from the properties of the extended map I just listed.) All this implies that a linear map on the exterior algebra of $V$ once restricted to the top exterior power reduces to multiplication by a constant: the determinant of the original linear transformation. Since pseudoscalars represent oriented volume ele
|linear-algebra|matrices|determinant|intuition|
0
Best Algebraic Geometry text book? (other than Hartshorne)
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
Before Hartshorne's book there was Mumford's Red Book of Varieties . I think it is a great introductory textbook to modern algebraic geometry (scheme theory). I found that Mumford is quite good at motivating new concepts; in particular I really enjoy his development of nonsingularity and the sheaf of differentials. I think another great aspect about this book is that it emphasizes how to define things intrinsically (i.e. without reference to a closed or open immersion into affine space) but also explains how to make local arguments (i.e. using immersion into affine space). A classic example of the above: (non intrinsic tangent space): Say X is a variety and p is a point of X. Choose an affine neighborhood so that p corresponds to the origin. Then this affine neighborhood is spec k[x1, ..., xn]/I for some ideal. Let I' be all the linear terms of I (i.e. if I = (x,y^2), then I' = (x)). Then the tangent space at p is spec k[x1,...,xn]/I'. (intrinsic tangent space): Let m be the maximal id
|big-list|reference-request|algebraic-geometry|
0
Best Algebraic Geometry text book? (other than Hartshorne)
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
Another book I wish I had known about when I was first reading Hartshorne is Miranda's Complex Algebraic Curves . Again, this book covers much less than Hartshorne and only discusses curves over the complex numbers (and their Jacobians). But it gives a lot more details and examples of concepts which I found particularly difficult when I first started learning algebraic geometry (sheaves, divisors, cohomology). It also has a bunch of exercises which I think are often not as challenging as the exercises in Hartshorne. It also covers a lot more of the 'classical' theory of curves than Hartshorne does; e.g. Weierstrass points.
|big-list|reference-request|algebraic-geometry|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
They're a much easier way to evaluate the function $f(x) = \dfrac{\varphi^x -(-\varphi)^{-x}}{\sqrt{5}}$ by hand, where $\varphi$ is the golden ratio and $x$ is an integer :)
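A quick Python check of Binet's closed form $F_n = (\varphi^n - (-\varphi)^{-n})/\sqrt 5$ against the defining recurrence (function names mine):

```python
def fib(n):
    """Fibonacci numbers by the recurrence: F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    """Binet's closed form: (phi^n - (-phi)^(-n)) / sqrt(5)."""
    phi = (1 + 5 ** 0.5) / 2
    return (phi ** n - (-phi) ** (-n)) / 5 ** 0.5
```

The rounding absorbs floating-point error; in exact arithmetic the two agree exactly for every integer $n$.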
|combinatorics|big-list|applications|fibonacci-numbers|
0
Best Algebraic Geometry text book? (other than Hartshorne)
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
My last suggestion would be Ravi Vakil's online notes on the foundations of algebraic geometry . I think these notes might be made into a full on textbook someday. I haven't looked through all of them but these notes seem to cover as much as Hartshorne does (if not more). Only rarely do Hartshorne and Vakil define things differently (`projective morphisms' is the only example that comes to mind). I've heard it said that Hartshorne's book is a `baby' version of EGA. I think Vakil's notes are somewhere between Hartshorne and EGA (probably not the midpoint though). At least Vakil discusses the theory of representable functors much more, and Noetherian hypotheses are less prevalent in Vakil's notes. Also Vakil's notes are more complete in that they also include proofs of many of the commutative algebra results that are just stated in Hartshorne. I think Vakil spends a lot more time motivating the material and often the notes are a bit conversational. Also there are tons of exercises and mo
|big-list|reference-request|algebraic-geometry|
0
A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language
The following is a quote from Surely you're joking, Mr. Feynman . The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.) Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false." It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?" "No holes." "Impossible! "Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!" Just when they think they've got me, I remind them, "But you said an orange! You can't
For any five points on the globe, there is a vantage point in outer space from which you could see at least 4 of the 5 points (assuming the moon or anything else isn't in the way). The proof is pretty simple, too...
|soft-question|big-list|examples-counterexamples|
0
Is there a geometrical interpretation to the notion of eigenvector and eigenvalues?
The wiki article on eigenvectors offers the following geometrical interpretation: Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue. Qn 1: If there is any other geometrical interpretation particularly in the context of a covariance matrix? The wiki also discusses the difference between left and right eigenvectors. Qn 2: Do the above geometrical interpretations hold irrespective of whether they are left or right eigenvectors?
Of course! Consider a coordinate transformation of rotation and/or scaling (but not translation): $v = Au$, where $v$ and $u$ are vectors and $A$ is a transformation matrix. Then the eigenvectors, if they have real components, are the axes which are left unrotated (scaled only) by the transformation (see Wikipedia). A covariance matrix is a symmetric, positive semi-definite matrix, so it has orthonormal eigenvectors, and these form a tuple of axes; I am fairly sure the eigenvectors form a new basis of linear combinations of the input variables in which the basis variables are uncorrelated, but I can't remember how to show this. For example, if w1 = [x;y] is a pair of independent unit-variance zero-mean Gaussian random variables, consider w2 = [u;v] = [1 1; 2 1][x;y] = (x+y, 2x+y), so that w1 = [-1 1; 2 -1][u;v] = [v-u; 2u-v]. Then cov(w2) = [2 3; 3 5]. This has eigenvectors which have sqrt(5) in them, hmmmm... As for question 2, I'm not sure.
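The decorrelation claim in the answer can be checked numerically. A NumPy sketch (all names mine, and this assumes NumPy is available): for unit-variance independent inputs, the covariance of $w_2 = A w_1$ is $A A^T$, and rotating into the eigenvector basis of that covariance diagonalizes it, i.e. the eigen-coordinates are uncorrelated.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
cov = A @ A.T                       # covariance of w2 = A w1 when cov(w1) = I
print(cov)                          # [[2. 3.] [3. 5.]], matching the answer

evals, evecs = np.linalg.eigh(cov)  # orthonormal eigenvectors of a symmetric matrix
rotated_cov = evecs.T @ cov @ evecs # covariance expressed in the eigenvector basis
# The off-diagonal entries vanish: the new coordinates are uncorrelated.
```

This is just the spectral theorem in action: a symmetric matrix is diagonalized by its orthonormal eigenvector basis.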
|linear-algebra|eigenvalues-eigenvectors|intuition|
0
Intuitive understanding of the derivatives of $\sin x$ and $\cos x$
One of the first things ever taught in a differential calculus class: The derivative of $\sin x$ is $\cos x$. The derivative of $\cos x$ is $-\sin x$. This leads to a rather neat (and convenient?) chain of derivatives: sin(x) cos(x) -sin(x) -cos(x) sin(x) ... An analysis of the shape of their graphs confirms some points; for example, when $\sin x$ is at a maximum, $\cos x$ is zero and moving downwards; when $\cos x$ is at a maximum, $\sin x$ is zero and moving upwards. But these "matching points" only work for multiples of $\pi/4$. Let us move back towards the original definition(s) of sine and cosine: At the most basic level, $\sin x$ is defined as -- for a right triangle with internal angle $x$ -- the length of the side opposite of the angle divided by the hypotenuse of the triangle. To generalize this to the domain of all real numbers, $\sin x$ was then defined as the Y-coordinate of a point on the unit circle that is an angle $x$ from the positive X-axis. The definition of $\cos x$
Perhaps the following diagram will provide insight: The idea is to look at the sine and cosine curves as projections of a helix drawn on a cylinder. If you look at the cylinder itself as a curled planar square of length $2\pi$, then the helix is a curled version of the square's diagonal. A tangent vector along the flat square's diagonal always lies at 45 degrees to the square's sides, say with length-"1" shadows in each direction; after smoothly curling the square into the cylinder, the tangent vector lies at 45 degrees to the cylinder's ($z$-)axis and the perpendicular ($xy$-)plane. Projecting the helix into the $zy$- and $zx$-planes gives graphs of sine and cosine. Projecting the helix's tangent vector gives tangent vectors to those graphs. The "$\mathrm dz$"s for these projected tangents are always $1$ (the "vertical" shadow of the helix's tangent vector). To get at "$\mathrm dy$" and "$\mathrm dx$" ("$v_x$" and "$v_y$" in the diagram) we project down into the $xy$-plane.
|calculus|trigonometry|
1
Best Algebraic Geometry text book? (other than Hartshorne)
Lifted from Mathoverflow : I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
For undergraduate algebraic geometry (significantly below the level of Hartshorne), Cox, Little, and O'Shea's Ideals, Varieties, and Algorithms is a pleasant treatment.
|big-list|reference-request|algebraic-geometry|
0
Where is the flaw in this "proof" that 1=2? (Derivative of repeated addition)
Consider the following: $1 = 1^2$ $2 + 2 = 2^2$ $3 + 3 + 3 = 3^2$ Therefore, $\underbrace{x + x + x + \ldots + x}_{x \textrm{ times}}= x^2$ Take the derivative of lhs and rhs and we get: $\underbrace{1 + 1 + 1 + \ldots + 1}_{x \textrm{ times}} = 2x$ Which simplifies to: $x = 2x$ and hence $1 = 2$. Clearly something is wrong but I am unable pinpoint my mistake.
You cannot differentiate the LHS of your equation $x + x + x + \cdots$ (repeated $x$ times) $= x^2$. This is because the LHS is not a function defined on an interval of real numbers: the number of terms depends on $x$, so the expression only makes sense when $x$ is a whole number. We can only differentiate functions defined on (an open subset of) the reals, so differentiating the LHS term by term is not a valid step.
|calculus|recreational-mathematics|fake-proofs|faq|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
How many ways are there to tile a $2 \times n$ grid with $1 \times 2$ dominoes? This problem can be solved using Fibonacci numbers. Let $S_n$ be the number of valid tilings of the $2 \times n$ grid. Each such tiling has either a vertical $1 \times 2$ domino or two horizontal dominoes on the right. Therefore, each tiling of a $2 \times (n-1)$ grid or a $2 \times (n-2)$ grid generates a tiling of the $2 \times n$ grid, and hence we have the recurrence relation $S_n = S_{n-1} + S_{n-2}$. This is precisely the recurrence relation of the Fibonacci numbers. Checking our base cases, we see that there is one way to tile a $2 \times 1$ grid and two ways to tile a $2 \times 2$ grid, so $S_1 = 1$ and $S_2 = 2$. Therefore, the number of tilings is precisely the Fibonacci sequence (shifted by one index: $S_n = F_{n+1}$).
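The recurrence is easy to check by machine. A small Python sketch (names mine) computes the tiling counts from the base cases $S_1 = 1$, $S_2 = 2$:

```python
def tilings(n):
    """Tilings of a 2 x n grid by 1 x 2 dominoes: S_1 = 1, S_2 = 2, S_n = S_{n-1} + S_{n-2}."""
    if n <= 2:
        return n
    a, b = 1, 2  # S_1, S_2
    for _ in range(n - 2):
        a, b = b, a + b
    return b

print([tilings(n) for n in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

The output is the Fibonacci sequence with its index shifted by one, as the counting argument predicts.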
|combinatorics|big-list|applications|fibonacci-numbers|
0
Different definitions of trigonometric functions
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent?
If you allow yourself a tiny bit of calculus ( "$\sin x / x \to 1$" as "$x \to 0$" ) and apply some combinatorics, there's a really nice geometric interpretation of the terms of the power series for the functions. Consider this diagram and the polygonal "spiral" that starts at $P_0$ and closes in on the point $P$ (where $|P_0 P| = 1$). The horizontal segments $P_{2n} P_{2n+1}$ alternately overshoot and undershoot the length of the cosine segment; the vertical segments $P_{2n+1} P_{2n+2}$ do the same for the sine segment. So, $\cos \theta = \sum_{n=0}^{\infty}(-1)^n | P_{2n} P_{2n+1} |$ $\sin \theta = \sum_{n=0}^{\infty} (-1)^n | P_{2n+1} P_{2n+2} |$ Now, the lengths $|P_{k} P_{k+1}|$ are equal to the lengths of the curves $|I_k|$, which constitute a series of successive involutes (with $I_0$ defined to be a segment, and $I_1$ defined to be an arc of the unit circle). Combinatorics and the calculus result I mentioned show that the involute lengths satisfy ... $|I_k| = \theta^k / k!$ ...
|geometry|calculus|trigonometry|
0
Where is the flaw in this "proof" that 1=2? (Derivative of repeated addition)
Consider the following: $1 = 1^2$ $2 + 2 = 2^2$ $3 + 3 + 3 = 3^2$ Therefore, $\underbrace{x + x + x + \ldots + x}_{x \textrm{ times}}= x^2$ Take the derivative of lhs and rhs and we get: $\underbrace{1 + 1 + 1 + \ldots + 1}_{x \textrm{ times}} = 2x$ Which simplifies to: $x = 2x$ and hence $1 = 2$. Clearly something is wrong but I am unable pinpoint my mistake.
Here's my explanation from an old sci.math post: Zachary Turner wrote on 26 Jul 2002: Let D = d/dx = derivative wrt x. Then D[x^2] = D[x + x + ... + x (x times)] = D[x] + D[x] + ... + D[x] (x times) = 1 + 1 + ... + 1 (x times) = x An obvious analogous fallacious argument proves both $D[x\,f(x)] = Df(x) \text{ ($x$ times)} = x\,Df(x)$ and $D[x\,f(x)] = Dx \text{ ($f(x)$ times)} = f(x)$, via $Dx = 1$, vs. the correct result: their sum $\rm\:f(x) + x\, Df(x)\:$ as given by the Leibniz product rule (= chain rule for times). The error arises from overlooking the dependence upon x in both arguments of the product $\rm\: x \ f(x)\:$ when applying the chain rule. The source of the error becomes clearer if we consider a discrete analog. This will also eliminate any tangential concerns on the meaning of "(x times)" for non-integer x. Namely, we consider the shift operator $\rm\ S:\, n \to n+1\ $ on polynomials $\rm\:p(n)\:$ with integer coefficients, where $\rm\:S p(n) = p(n+1).\:$ Here is a similar fallacy: S[n^2] = S[n
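The overlooked dependence can be exhibited numerically in the discrete analog. A Python sketch (names mine): with the forward difference $\Delta p(n) = p(n+1) - p(n)$ standing in for $D$, the correct discrete product rule for $n \cdot f(n)$ keeps a term that the "differentiate each of the $n$ copies" argument drops.

```python
def delta(p, n):
    """Forward difference: (delta p)(n) = p(n+1) - p(n), a discrete analogue of D."""
    return p(n + 1) - p(n)

f = lambda n: n          # so n * f(n) = n^2, as in the 1 = 2 "proof"
g = lambda n: n * f(n)

for n in range(10):
    correct = f(n + 1) + n * delta(f, n)  # one form of the discrete product rule
    fallacy = n * delta(f, n)             # "differentiate each of the n copies"
    assert delta(g, n) == correct == 2 * n + 1
    assert fallacy == n                   # misses the f(n+1) term entirely
```

The gap between `correct` and `fallacy` is exactly the term contributed by the varying number of summands, the same term lost in the original 1 = 2 argument.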
|calculus|recreational-mathematics|fake-proofs|faq|
0
Logic problem: Identifying poisoned wines out of a sample, minimizing test subjects with constraints
I just got out from my Math and Logic class with my friend. During the lecture, a well-known math/logic puzzle was presented: The King has $1000$ wines, $1$ of which is poisoned. He needs to identify the poisoned wine as soon as possible, and with the least resources, so he hires the protagonist, a Mathematician. The king offers you his expendable servants to help you test which wine is poisoned. The poisoned wine is very potent, so much that one molecule of the wine will cause anyone who drinks it to die. However, it is slow-acting. The nature of the slow-acting poison means that there is only time to test one "drink" per servant. (A drink may be a mixture of any number of wines) (Assume that the King needs to know within an hour, and that any poison in the drink takes an hour to show any symptoms) What is the minimum amount of servants you would need to identify the poisoned wine? With enough time and reasoning, one can eventually see that this requires at most ten ( $10$ ) servants
I asked this question on MathOverflow and got a great answer there. For $k = 2$ I can do it with ${\lceil \log_2 N \rceil + 2 \choose 2} - 1$ servants. In particular for $N = 1000$ I can do it with $65$ servants. The proof is somewhat long, so I don't want to post it until I've thought about the problem more. I haven't been able to improve on the above result. Here's how it works. Let $n = \lceil \log_2 N \rceil$. Let me go through the algorithm for $k = 1$ so we're all on the same page. Number the wines and assign each of them the binary expansion of their number, which consists of $n$ bits. Find $n$ servants, and have servant $i$ drink all the wines whose $i^{th}$ bit is $1$. Then the set of servants that die tells you the binary expansion of the poisoned wine. For $k = 2$ we need to find $n$ butlers, $n$ maids, and ${n \choose 2}$ cooks. The cooks will be named $(i, j)$ for some positive integers $1 \le i < j \le n$. If both butler $i$ and maid $i$ die, then one of the poisoned wines has $i^{th}$
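The $k = 1$ scheme is easy to simulate. A Python sketch (all names mine): each of the $\lceil \log_2 N \rceil = 10$ servants drinks exactly the wines with a 1 in "their" bit, and the death pattern spells out the poisoned wine's number in binary.

```python
def find_poisoned(num_wines, poisoned):
    """Identify the single poisoned wine (numbered from 0) with ceil(log2 N) servants."""
    n = (num_wines - 1).bit_length()  # 10 servants suffice for 1000 wines
    # Servant i drinks every wine whose i-th binary digit is 1.
    drinks = [[w for w in range(num_wines) if (w >> i) & 1] for i in range(n)]
    deaths = [poisoned in drinks[i] for i in range(n)]
    # Read the poisoned wine's binary expansion back off the death pattern.
    return sum(1 << i for i, died in enumerate(deaths) if died)

print(find_poisoned(1000, 421))  # 421
```

Each possible death pattern corresponds to a distinct wine, which is why $10$ servants are both sufficient and necessary for $1000$ wines.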
|puzzle|recreational-mathematics|
1
Mandelbrot-like sets for functions other than $f(z)=z^2+c$?
Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
I have no idea about well-studied, but here's what I've gotten from a bit of playing around ( here's the relevant Mathematica code ). $\boldsymbol{f_c(z)=\cos(z)+c}$ where $\cos(z)$ is defined however Mathematica defines it, using the escape time algorithm with escape radius $10\pi$ and 100 iterations maximum: $\boldsymbol{f_c(z)=\sin(z)+c}$ appears to be the same as cosine, but horizontally translated. $\boldsymbol{f_c(z)=e^z+c}$, using the escape time algorithm with escape radius 50 and 100 iterations maximum (it's vertically periodic with period $2\pi$): edit (Mathematica code edited, too): $\boldsymbol{f_c(z)=cz(1-z)}$ (since camccann's answer mentions logistic maps) using the escape time algorithm with escape radius 50 and 200 iterations maximum:
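For anyone wanting to reproduce pictures like these without Mathematica, here is a minimal Python sketch of the escape-time algorithm the answer uses (function and parameter names mine): iterate $z \mapsto f(z) + c$ from $z = 0$ and record how long $|z|$ stays inside the escape radius.

```python
import cmath

def escape_time(f, c, radius, max_iter):
    """Iterations of z -> f(z) + c (starting from z = 0) before |z| exceeds radius;
    returns max_iter if the orbit never escapes, i.e. c is treated as in the set."""
    z = 0
    for i in range(max_iter):
        if abs(z) > radius:
            return i
        z = f(z) + c
    return max_iter

# Classic Mandelbrot, f(z) = z^2: c = 0 stays bounded, c = 2 escapes quickly.
print(escape_time(lambda z: z * z, 0, 2.0, 100))        # 100 (never escapes)
print(escape_time(lambda z: z * z, 2, 2.0, 100))        # small: escapes fast
# The cosine variant from the answer, with escape radius 10*pi:
print(escape_time(cmath.cos, 0, 10 * cmath.pi, 100))    # 100 (orbit stays bounded)
```

Coloring a grid of $c$ values by `escape_time` reproduces the images described above; only `f`, the radius, and the iteration cap change between variants.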
|fractals|dynamical-systems|
0
Mandelbrot-like sets for functions other than $f(z)=z^2+c$?
Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
A variant on the M-set can be defined in straightforward fashion for any iterated function in the complex plane parameterized by a single initial value. For instance, slight modifications produce the tricorn and burning ship fractals. However, most such variations tend to be either boring, incoherent, or obviously derived from the Mandelbrot set--nothing particularly novel. Some obvious patterns also emerge quickly from many variations: Real exponents alter symmetry, imaginary exponents cause asymmetric twisting, disreputable functions produce misshapen lumps like the burning ship, and so on. On the other hand, it's more difficult than you might expect to avoid the M-set in the first place: For a well known example, consider the Newton-Raphson method for approximating roots , which can be generalized to the complex plane in straightforward fashion. For some polynomial, a point in the complex plane may or may not converge to a particular root after some number of Newton-Raphson iteratio
|fractals|dynamical-systems|
1
Why should one expect valuations to be related to primes? How to treat an infinite place algebraically?
I understand the mechanics of the proof of Ostrowski's Theorem, but I'm a little unclear on why one should expect valuations to be related to primes. Is this a special property of number fields and function fields, or do primes of K[x,y] correspond to valuations on K(x,y) in the same way? I'm hoping for an answer that can explain what exactly are the algebraic analogs of archimedian valuations, and how to use them - for example, I've heard that the infinite place on K(x) corresponds to the "prime (1/x)" - how does one take a polynomial in K[x] "mod (1/x)" rigorously? Thanks in advance.
I couldn't divine much information on your background (e.g. undergraduate, master's level, PhD student...) from the question, but I recently taught an intermediate level graduate course which had a unit on valuation theory. Sections 1.6 through 1.8 of http://alpha.math.uga.edu/~pete/8410Chapter1.pdf address your questions. In particular, if your field $K$ is the fraction field of a Dedekind domain $R$, then you can always use each prime ideal $\mathfrak{p}$ of $R$ to define a valuation on $K$, essentially the "order at $\mathfrak{p}$". There is also a converse result, Theorem 13: if you have a valuation on $K$ which has the additional property that it is non-negative at every element of the Dedekind domain $R$, then it has to be (up to equivalence) the $\mathfrak{p}$-adic valuation for some $\mathfrak{p}$. I felt the need to give this additional condition a name, so I called such a valuation $R$-regular. The point is that (as Qiaochu says in his comments), in case $K$ is a number
|number-theory|algebraic-geometry|
1
Why should one expect valuations to be related to primes? How to treat an infinite place algebraically?
I understand the mechanics of the proof of Ostrowski's Theorem, but I'm a little unclear on why one should expect valuations to be related to primes. Is this a special property of number fields and function fields, or do primes of K[x,y] correspond to valuations on K(x,y) in the same way? I'm hoping for an answer that can explain what exactly are the algebraic analogs of archimedian valuations, and how to use them - for example, I've heard that the infinite place on K(x) corresponds to the "prime (1/x)" - how does one take a polynomial in K[x] "mod (1/x)" rigorously? Thanks in advance.
Discrete valuations ↔ points on a curve. For a nonsingular projective curve over an algebraically closed field, there is a one-to-one correspondence between the points on it and the discrete valuations of its function field (i.e. the field of meromorphic functions on the curve). The correspondence is: a point $P$ corresponds to the valuation sending a function $f$ to the order of the zero/pole of $f$ at $P$. Maximal ideals ↔ points on a variety. At least for varieties (common zero sets of several polynomials) over an algebraically closed field, there is a one-to-one correspondence between points on the variety and maximal ideals in $k[x_1,\cdots,x_n]$. The correspondence is: a point $P = (a_1,\cdots,a_n)$ corresponds to the ideal of polynomials vanishing at $P$, which turns out to be $(x_1-a_1,\cdots,x_n-a_n)$. This is true not only for curves, but for all varieties (Hilbert's Nullstellensatz). So putting these together, for nonsingular projective curves over an algebraically closed field, you know that there is a one-one correspondence between the maxim
|number-theory|algebraic-geometry|
0
Find all $x$ for which $x^2 + (x+1)^2$ is a square
How can I find all natural numbers $x$ for which $x^2 + (x+1)^2$ is a perfect square?
Suppose $x^2 + (x+1)^2 = y^2$. We can rewrite this as $(2x+1)^2 + 1 = 2y^2$, or $(2x+1)^2 - 2y^2 = -1$. With $z=2x+1$ we have $z^2 - 2y^2 = -1$. This is Pell's equation; the Wikipedia article shows how to solve it.
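As an illustration of where this leads (my own sketch, not part of the original answer): the solutions of $z^2 - 2y^2 = -1$ yield the recurrence $x_{k+1} = 6x_k - x_{k-1} + 2$ for the valid $x$, starting from $x_0 = 0$, $x_1 = 3$.

```python
import math

def pell_x_values(count):
    """Generate natural x with x^2 + (x+1)^2 a perfect square, via the
    recurrence x_{k+1} = 6*x_k - x_{k-1} + 2 (which follows from the
    fundamental solution of z^2 - 2*y^2 = -1 with z = 2*x + 1)."""
    prev, cur = 0, 3  # x = 0 is the trivial solution; x = 3 gives 3^2 + 4^2 = 5^2
    xs = []
    for _ in range(count):
        xs.append(cur)
        prev, cur = cur, 6 * cur - prev + 2
    return xs

for x in pell_x_values(4):
    s = x * x + (x + 1) * (x + 1)
    print(x, math.isqrt(s) ** 2 == s)
```

This prints the solutions $3, 20, 119, 696, \dots$, each confirmed to give a perfect square.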
|elementary-number-theory|
1
Characterising functions $f$ that can be written as $f = g \circ g$?
I'd like to characterise the functions that ‘have square roots’ in the function composition sense. That is, can a given function $f$ be written as $f = g \circ g$ (where $\circ$ is function composition)? For instance, the function $f(x) = x+10$ has a square root $g(x) = x+5$. Similarly, the function $f(x) = 9x$ has a square root $g(x) = 3x$. I don't know if the function $f(x) = x^2 + 1$ has a square root, but I couldn't think of any. Is there a way to determine which functions have square roots? To keep things simpler, I'd be happy just to consider functions $f: \mathbb R \to \mathbb R$.
I showed you the link to the MO question mostly to convince you that this is a hard question. I will "answer" it in the special case that $f$ is a bijection. Recall that given a bijection $f : S \to S$, where $S$ is a set, a cycle of $f$ of length $n$ is a set of distinct points $x, f(x), \dots, f^{n-1}(x)$ such that $f^n(x) = x$. A cycle of infinite length is a set of distinct points $x, f(x), f^2(x), \dots$. It is not hard to see that $S$ is a disjoint union of cycles of $f$. Claim: A bijection $f : S \to S$ has a square root if and only if there is an even number of cycles of $f$ of any given even length. (For the purposes of this result, infinity is an even number; so there can be an infinite number of cycles, and you need to consider cycles of infinite length.) Proof. First we show that any bijection with a square root has this property. Let $g : S \to S$ be a bijection such that $g(g(x)) = f(x)$. Then each cycle of $g$ corresponds to either one or two cycles of $f$, as follows. If the c
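When $S$ is finite, the criterion in the claim is easy to test mechanically. Here is a sketch (function and variable names are my own) that takes a permutation as a list with `perm[i]` the image of `i`:

```python
from collections import Counter

def has_sqrt(perm):
    """Does the permutation perm have a compositional square root?
    Criterion from the claim above: for every even cycle length, the
    number of cycles of that length must be even."""
    seen = [False] * len(perm)
    lengths = Counter()
    for i in range(len(perm)):
        if not seen[i]:
            # Walk the cycle through i, marking its points and measuring it.
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                cycle_len += 1
            lengths[cycle_len] += 1
    return all(count % 2 == 0 for length, count in lengths.items() if length % 2 == 0)

print(has_sqrt([1, 0]))        # a single 2-cycle -> False
print(has_sqrt([1, 0, 3, 2]))  # two 2-cycles -> True
```

For the second example a square root is the 4-cycle $0 \to 2 \to 1 \to 3 \to 0$, matching the "pair up cycles of equal length" direction of the proof.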
|functions|elementary-set-theory|function-and-relation-composition|
1
For any prime $p > 3$, why is $p^2-1$ always divisible by $24$?
Let $p>3$ be a prime. Prove that $24 \mid p^2-1$ . I know this is very basic and old hat to many, but I love this question and I am interested in seeing whether there are any proofs beyond the two I already know.
Here's a very simplistic proof: $n^2 \equiv 1 \pmod{24}$ for $n=1,5,7,11$, by checking each case individually. $(n+12)^2 = n^2 + 24n + 144 \equiv n^2 \pmod{24}$. Therefore, $n^2 \equiv 1 \pmod{24}$ when $n$ is odd and not divisible by $3$, and so $n^2-1$ is divisible by $24$ for these $n$. You don't need primality of $p$ here! A slight modification would be to use $1$ and $5$ as "base cases", and use the fact that $(n+6)^2 = n^2 + 12n + 36 = n^2 + 12(n+3)$, which is congruent to $n^2 \pmod{24}$ when $(n+3)$ is even, i.e. $n$ is odd.
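As a quick machine check of the case analysis (my own sketch): every odd $n$ not divisible by $3$ should satisfy $24 \mid n^2 - 1$.

```python
def all_residues_check(limit):
    """Check that n^2 - 1 is divisible by 24 for every n < limit
    that is odd and not divisible by 3 (primality is not needed)."""
    return all((n * n - 1) % 24 == 0
               for n in range(1, limit)
               if n % 2 == 1 and n % 3 != 0)

print(all_residues_check(100_000))  # True
```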
|elementary-number-theory|prime-numbers|divisibility|
0
Summing ${\frac{1}{n^2}}$ over subsets of $\mathbb{N}$.
Are there $2$ subsets, say $A$ and $B$, of the naturals such that $$\sum_{n\in A} f(n) = \sum_{n\in B} f(n)$$ where $f(n)={\frac{1}{n^2}}$? If $f(n)=\frac{1}{n}$ then there are many examples, which is probably a consequence of the fact that the harmonic series diverges: $$\frac23 = \frac12 + \frac16 = \frac14+\frac13+\frac1{12}$$ And if $f(n)=b^{-n}$ for base $b = 2$ then it is true, because $\sum_{n>M} f(n) = f(M)$ for all $M$ (binary expansions give a surjection, but not a bijection, $2^{\mathbb{N}} \to [0,1]$). So we have sort of an in-between case here. Also, what if $A$ and $B$: are required to be finite sets? are required to be infinite and disjoint?
Yes to both cases: 1) $\frac{1}{15^2}+\frac{1}{20^2}=\frac{1}{12^2}$ 2) $\frac{1}{15^2}+\frac{1}{150^2}+\frac{1}{1500^2}+\dots+\frac{1}{20^2}+\frac{1}{200^2}+\frac{1}{2000^2}+\dots=\frac{1}{12^2}+\frac{1}{120^2}+\frac{1}{1200^2}+\dots$ For the first case: if $(a,b,c)$ is a Pythagorean triple, so that $a^2+b^2=c^2$, then $$\frac{1}{a^2 b^2}=\frac{1}{a^2 c^2}+\frac{1}{b^2 c^2}.$$
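Both identities can be verified with exact rational arithmetic. A quick sketch (mine, not the answerer's), using partial sums for the infinite case:

```python
from fractions import Fraction as F

# Finite case: a Pythagorean triple (a, b, c) with a^2 + b^2 = c^2 gives
# 1/(ab)^2 = 1/(ac)^2 + 1/(bc)^2; for (3, 4, 5) this is 1/12^2 = 1/15^2 + 1/20^2.
a, b, c = 3, 4, 5
assert F(1, (a * b) ** 2) == F(1, (a * c) ** 2) + F(1, (b * c) ** 2)

# Infinite case: scaling the denominators by powers of 10 multiplies both
# sides by the same geometric factor, so equality holds term by term,
# hence also for every partial sum.
lhs = sum(F(1, (15 * 10 ** k) ** 2) + F(1, (20 * 10 ** k) ** 2) for k in range(10))
rhs = sum(F(1, (12 * 10 ** k) ** 2) for k in range(10))
print(lhs == rhs)  # True
```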
|sequences-and-series|
1
Nonprimes with $3^{n-1} \equiv 2^{n-1} \pmod n$
Is it true that there are infinitely many nonprime integers $n$ such that $3^{n-1} - 2^{n-1}$ is a multiple of $n$?
As Qiaochu Yuan pointed out, take a Carmichael number $q$. By definition, $a^{q-1} \equiv 1 \pmod q$ for every $a$ coprime to $q$, so as long as $q$ is not divisible by $3$ (Carmichael numbers are always odd, but some, like $561 = 3 \cdot 11 \cdot 17$, are divisible by $3$; the smallest that works is $1105 = 5 \cdot 13 \cdot 17$), both $3^{q-1}$ and $2^{q-1}$ are congruent to $1$ mod $q$, and their difference is a multiple of $q$. Since there are infinitely many Carmichael numbers (and, by results on Carmichael numbers in arithmetic progressions, infinitely many coprime to $3$), you are done.
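One subtlety worth checking by machine: the congruence $a^{q-1} \equiv 1 \pmod q$ only holds for bases $a$ coprime to $q$, and some Carmichael numbers, like $561 = 3 \cdot 11 \cdot 17$, are divisible by $3$. A naive sketch of mine (function names are my own; the test is only practical for small $q$):

```python
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_carmichael(n):
    """Naive check: n is composite and a^(n-1) ≡ 1 (mod n)
    for every a coprime to n.  Fine for small n only."""
    return (n > 3 and not is_prime(n)
            and all(pow(a, n - 1, n) == 1 for a in range(2, n) if gcd(a, n) == 1))

q = 561   # Carmichael, but divisible by 3, so 3^(q-1) is not 1 mod q
print(is_carmichael(q), (pow(3, q - 1, q) - pow(2, q - 1, q)) % q == 0)   # True False
q = 1105  # Carmichael number 5 * 13 * 17, coprime to both 2 and 3
print(is_carmichael(q), (pow(3, q - 1, q) - pow(2, q - 1, q)) % q == 0)   # True True
```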
|number-theory|
0
Watchdog Problem
I just came up with this problem yesterday. Problem: Assume there is an important straight line segment AB that needs to be watched at all times. A watchdog can see in one direction in front of itself and must walk at a constant non-zero speed at all times. (The watchdogs need not all have the same speed.) When it reaches an end of the segment, it must turn back (instantaneously) and keep watching the line. How many watchdogs are needed to guarantee that the line segment is watched at all times? And how (initial positions and speeds of the dogs)? Note: It's clear that two dogs are not enough. I conjecture that four will suffice and three will not. For example, the configuration below fails from 7.5 seconds onward if AB's length is 10 meters. Dog 1 at A walks to the right with speed 1.0 m/s. Dog 2 between A and B walks to the right with speed 1.0 m/s. Dog 3 at B walks to the left with speed 1.0 m/s. Or it can be illustrated as: A ---------------------------------------- B 0.0 sec 1 -->
Three dogs are enough, I think. Let the length of the line segment be 1 (with coordinates from 0 to 1). First dog: start position = 0, speed = +1/3. Second dog: start position = 2/3, speed = +1/3. Third dog: start position = 2/3, speed = -1/3. After 1 second the configuration becomes similar to the starting one.
|discrete-mathematics|recreational-mathematics|puzzle|word-problem|
1
Interpolating between volume preserving diffeomorphisms of sphere
I know the volume preserving diffeomorphisms of the $2$-sphere form a group $\mathrm{SDiff}(S^2)$. I would like to know if it is a Lie group, which I assume would make interpolation easier (as with rotations). So that is one question: is it a Lie group? Also, is the group path connected? If so, how can I interpolate between two elements of the group? These are subjects I know very little about; I apologize if I'm phrasing things in a way that sounds ridiculous.
The term "volume preserving" sounds a bit ambiguous to me: do you mean that your map preserves the total volume, or do you mean that its differential at every point preserves volume (i.e. has determinant 1)? The former is weaker than the latter, and gives you more room for interpolation. In any case, there is a famous invariant of continuous maps $S^2\to S^2$ called the degree. Any two maps with the same degree are homotopic to each other. Being volume preserving (in the former sense) implies that the degree is $1$ (taking orientation into account!), so you can interpolate between any two volume preserving maps. However, the intermediate maps in this line of reasoning are only continuous, not necessarily diffeomorphisms. I'm confident that with a standard argument ("approximate continuous functions by differentiable ones") you can get them to be differentiable, but I don't know about the "is a diffeomorphism" and "is locally volume preserving" parts.
|geometry|abstract-algebra|
0