How can there be explicit polynomial equations for which the existence of integer solutions is unprovable?
This answer suggests that there are explicit polynomial equations for which the existence (or nonexistence) of integer solutions is unprovable. How can this be?
Edit: This has been explained better by Joel David Hamkins; cf. this MO thread. This is because given any first-order formula $\phi(x_1, \dots, x_n)$ with $n$ parameters in the integers, there is a polynomial $P(x_1, \dots, x_n, z_1, \dots, z_m)$ which can be proved to have a root in $z_1, \dots, z_m$ in the integers if and only if $\phi(x_1, \dots, x_n)$ holds---this is a version of Matiyasevich's solution to Hilbert's tenth problem. Now, it's possible to construct within any formal system explicit first-order formulas which can't be proved or disproved (for instance, the statement "this formal system is consistent"). This is the second incompleteness theorem. These first-order formulas correspond to polynomials by the first paragraph. In particular, one can show that you can construct a polynomial $P(z_1, \dots, z_m)$ corresponding to the statement "mathematics* is inconsistent" translated to a formula. Thus, if mathematics is consistent, there's no mathematical proof that this polynomial has no integer roots, even though it has none.
|logic|diophantine-equations|
1
Why $\sqrt{-1 \cdot {-1}} \neq \sqrt{-1}^2$?
I know there must be something unmathematical in the following but I don't know where it is: \begin{align} \sqrt{-1} &= i \\ \frac1{\sqrt{-1}} &= \frac1i \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \sqrt{\frac1{-1}} &= \frac1i \\ \sqrt{\frac{-1}1} &= \frac1i \\ \sqrt{-1} &= \frac1i \\ i &= \frac1i \\ i^2 &= 1 \\ -1 &= 1 \quad !!? \end{align}
Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$. edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works." In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) cannot be manipulated with the usual real-number identities. This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.
|algebra-precalculus|complex-numbers|arithmetic|fake-proofs|monodromy|
1
Why isn't reflexivity redundant in the definition of equivalence relation?
An equivalence relation is defined by three properties: reflexivity, symmetry and transitivity. Don't symmetry and transitivity imply reflexivity? Consider the following argument. For any $a$ and $b$, $a R b$ implies $b R a$ by symmetry. Using transitivity, we have $a R a$. Source: Exercise 8.46, p. 195 of Mathematical Proofs, 2nd (not 3rd) ed., by Chartrand et al.
Actually, without the reflexivity condition, the empty relation would count as an equivalence relation, which is non-ideal. Your argument tacitly uses the hypothesis that for each $a$ there exists some $b$ such that $aRb$ holds. When that hypothesis is satisfied, symmetry and transitivity do imply reflexivity, but it need not hold in general.
|elementary-set-theory|relations|equivalence-relations|
1
Why is the decimal representation of $\frac17$ "cyclical"?
$\frac17 = 0.(142857)$... with the digits in the parentheses repeating. I understand that the reason it's a repeating fraction is because $7$ and $10$ are coprime. But this cyclical nature is something that is not observed in any other reciprocal of a natural number that I know of (besides multiples of $7$). (If I am wrong, I hope that I may find others through this question.) By "cyclical," I mean:

1/7 = 0.(142857)...
2/7 = 0.(285714)...
3/7 = 0.(428571)...
4/7 = 0.(571428)...
5/7 = 0.(714285)...
6/7 = 0.(857142)...

where all of the repeating digits are the same string of digits, but shifted. Not just a simple "they are all the same digits re-arranged", but the same digits in the same order, but shifted. Or perhaps more strikingly, from the wikipedia article:

1 × 142,857 = 142,857
2 × 142,857 = 285,714
3 × 142,857 = 428,571
4 × 142,857 = 571,428
5 × 142,857 = 714,285
6 × 142,857 = 857,142

What is it about the number $7$ in relation to the base $10$ (and its prime factorization $2 \cdot 5$) that produces this behavior?
For a prime p, the length of the repeating block of $\frac{1}{p}$ is the least positive integer k for which $p|(10^k-1)$. As in mau's answer, $k|(p-1)$, so $k\leq p-1$. When $k=p-1$, then $\frac{1}{p}$ and its multiples behave as discussed in the question. Of the first 100 primes, this is true for 7, 17, 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193, 223, 229, 233, 257, 263, 269, 313, 337, 367, 379, 383, 389, 419, 433, 461, 487, 491, 499, 503, 509, 541 (sequence A001913 in OEIS). (List generated in Mathematica using Select[Table[Prime[n], {n, 1, 100}], # - 1 == Length[RealDigits[1/#][[1]][[1]]]&] .)
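The characterization above is easy to check directly. Here is a small Python sketch (the answer's own search used Mathematica; the function name `period` is mine), which computes the period of $\frac{1}{p}$ as the multiplicative order of 10 modulo p:

```python
# Assumption: p is coprime to 10, so 10 is invertible mod p and the loop terminates.
# The period of 1/p is the least k with p | (10^k - 1).
def period(p):
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

# Full-reptend primes: those with period p - 1 (sequence A001913).
full_reptend = [p for p in [7, 11, 13, 17, 19, 23, 29] if period(p) == p - 1]
```

The resulting list matches the start of the sequence quoted in the answer.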
|number-theory|fractions|decimal-expansion|
1
Construct a bijection from $\mathbb{R}$ to $\mathbb{R}\setminus S$, where $S$ is countable
Two questions: Find a bijective function from $(0,1)$ to $[0,1]$. I haven't found the solution to this since I saw it a few days ago. It strikes me as odd--mapping an open set onto a closed set. $S$ is countable. It's trivial to find a bijective function $f:\mathbb{N}\to\mathbb{N}\setminus S$ when $|\mathbb{N}| = |\mathbb{N}\setminus S|$; let $f(n)$ equal the $n^{\text{th}}$ smallest number in $\mathbb{N}\setminus S$. Are there any analogous trivial solutions to $f:\mathbb{R}\to\mathbb{R}\setminus S$?
An explicit bijection $(0,1) \to [0,1]$ for part $1$ is given by: $$f\left(\frac{1}{2}\right) = 0,\quad f\left(\frac{1}{3}\right) = 1,\quad f\left(\frac{1}{n}\right) = \frac{1}{n-2}\ \textrm{for}\ n > 3,$$ $$f(x) = x\ \textrm{for}\ x\ \textrm{not equal to a reciprocal of an integer}$$ For a bijection $\mathbb{R}$ to $\mathbb{R}\setminus S$, we can number the elements of $S$ (because $S$ is countable) as $s_1, s_2, s_3, \dots$ Choose an infinite sequence $t_1, t_2, \dots$ of distinct elements of $\mathbb{R}$, none of which are in $S$. Then define: $$g(s_i) = t_{2i},\quad g(t_i) = t_{2i-1},\quad g(x) = x\ (\textrm{otherwise}).$$ (Note the odd index $t_{2i-1}$: mapping $t_i$ to $t_{2i+1}$ instead would leave $t_1$ outside the image, so $g$ would not be surjective.) This is a bijection from $\mathbb{R}$ to $\mathbb{R}\setminus S$.
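The first map can be exercised mechanically. A sketch restricted to rational inputs (so equality tests are exact); the case analysis follows the answer's definition of $f$:

```python
from fractions import Fraction

def f(x):
    """The explicit bijection f : (0,1) -> [0,1], restricted to rationals."""
    if x == Fraction(1, 2):
        return Fraction(0)
    if x == Fraction(1, 3):
        return Fraction(1)
    if x.numerator == 1 and x.denominator > 3:   # x = 1/n with n > 3
        return Fraction(1, x.denominator - 2)     # shift down to 1/(n-2)
    return x                                      # everything else is fixed
```

The special values $\frac12 \mapsto 0$ and $\frac13 \mapsto 1$ absorb the two new endpoints, and the shifted reciprocals fill the resulting gaps.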
|elementary-set-theory|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
Perhaps it's not an entirely practical application, but Fibonacci numbers can be used to convert from miles to kilometers and vice versa: take two consecutive Fibonacci numbers, for example 5 and 8, and you're done converting. No kidding – there are 8 kilometers in 5 miles. To convert back, just read the result from the other end: there are 5 miles in 8 km! But why does it work? Fibonacci numbers have the property that the ratio of two consecutive numbers tends to the Golden ratio as the numbers get bigger and bigger. The Golden ratio is approximately 1.618. Coincidentally, there are 1.609 kilometers in a mile, which is within 0.5% of the Golden ratio.
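The rule of thumb is easy to mechanize; a small sketch (the function name `fib_pairs` is mine):

```python
def fib_pairs(count):
    """Consecutive Fibonacci pairs (a, b); read a as miles, b as kilometers."""
    a, b = 1, 1
    pairs = []
    for _ in range(count):
        a, b = b, a + b
        pairs.append((a, b))
    return pairs

# Each ratio b/a approaches the golden ratio 1.618..., close to 1.609 km/mile,
# so the approximation improves as the numbers grow.
```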
|combinatorics|big-list|applications|fibonacci-numbers|
1
In how many different ways can I sort balls of two different colors
Let's say, I have 4 yellow and 5 blue balls. How do I calculate in how many different orders I can place them? And what if I also have 3 red balls?
The case of two colors is simple: if you have $m$ yellow balls and $n$ blue ones, you only need to choose $m$ positions among $m+n$ possibilities, that is, $\binom{m+n}{m}=\frac{(m+n)!}{m!\,n!}$. The other balls' positions are then automatically determined.
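In code, with the three-color case from the question included as well (the general count is a multinomial coefficient; the helper name `arrangements` is mine):

```python
import math

def arrangements(*counts):
    """Distinct orderings of balls, given the count of each color."""
    total = math.factorial(sum(counts))
    for c in counts:
        total //= math.factorial(c)
    return total

# 4 yellow, 5 blue: 9!/(4!5!) = 126.  Adding 3 red: 12!/(4!5!3!) = 27720.
```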
|combinatorics|
0
Can non-linear transformations be represented as Transformation Matrices?
I just came back from an intense linear algebra lecture which showed that linear transformations could be represented by transformation matrices; with more generalization, it was later shown that affine transformations (linear + translation) could be represented by matrix multiplication as well. This got me to thinking about all those other transformations I've picked up over the past years I've been studying mathematics. For example, polar transformations -- transforming $x$ and $y$ to two new variables $r$ and $\theta$. If you mapped $r$ to the $x$ axis and $\theta$ to the $y$ axis, you'd basically have a coordinate transformation. A rather warped one, at that. Is there a way to represent this using a transformation matrix? I've tried fiddling around with the numbers but everything I've tried to work with has fallen apart quite embarrassingly. More importantly, is there a way to, given a specific non-linear transformation, construct a transformation matrix from it?
No. Everything is determined by a choice of basis. For a more in-depth answer, I would need to explain the first two weeks of linear algebra and draw some commutative diagrams. If you'd like a better explanation, see pages 12-14 of Emil Artin's monograph Geometric Algebra .
|linear-algebra|transformation|
0
Best Intermediate/Advanced Computer Science book
I'm very interested in Computer Science (computational complexity, etc.). I've already finished a University course in the subject (using Sipser's "Introduction to the Theory of Computation"). I know the basics, i.e. Turing Machines, Computability (Halting problem and related reductions), Complexity classes (time and space, P/NP, L/NL, a little about BPP). Now, I'm looking for a good book to learn about some more advanced concepts. Any ideas?
The Art of Computer Programming (Donald Knuth) The legendary book (of multiple volumes, still incomplete) can't go without mention. For learning about algorithms and their complexities, there is no rival. It's written with practicality in mind, though from a largely theoretical perspective.
|soft-question|big-list|reference-request|
0
Can non-linear transformations be represented as Transformation Matrices?
I just came back from an intense linear algebra lecture which showed that linear transformations could be represented by transformation matrices; with more generalization, it was later shown that affine transformations (linear + translation) could be represented by matrix multiplication as well. This got me to thinking about all those other transformations I've picked up over the past years I've been studying mathematics. For example, polar transformations -- transforming $x$ and $y$ to two new variables $r$ and $\theta$. If you mapped $r$ to the $x$ axis and $\theta$ to the $y$ axis, you'd basically have a coordinate transformation. A rather warped one, at that. Is there a way to represent this using a transformation matrix? I've tried fiddling around with the numbers but everything I've tried to work with has fallen apart quite embarrassingly. More importantly, is there a way to, given a specific non-linear transformation, construct a transformation matrix from it?
As Harry says, you can't (the example of affine transformations can be tweaked to work because they're just linear ones with the origin translated). However, approximating a nonlinear function by a linear one is something we do all the time in calculus through the derivative, and is what we often have to do to make a mathematical model of some real-world phenomenon tractable.
|linear-algebra|transformation|
0
Applications of the Fibonacci sequence
The Fibonacci sequence is very well known, and is often explained with a story about how many rabbits there are after $n$ generations if they each produce a new pair every generation. Is there any other reason you would care about the Fibonacci sequence?
They appear in pattern formation, for instance in the numbers of petals on a plant, the segments on a pineapple or pinecone, and the structure of nautilus shells. http://library.thinkquest.org/27890/applications5.html http://www.popmath.org.uk/rpamaths/rpampages/sunflower.html http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/
|combinatorics|big-list|applications|fibonacci-numbers|
0
In how many different ways can I sort balls of two different colors
Let's say, I have 4 yellow and 5 blue balls. How do I calculate in how many different orders I can place them? And what if I also have 3 red balls?
For some reason I find it easier to think in terms of letters of a word being rearranged, and your problem is equivalent to asking how many permutations there are of the word YYYYBBBBB. The formula for counting permutations of words with repeated letters (whose reasoning has been described by Noldorin) gives us the correct answer of 9!/(4!5!) = 126.
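At this size the count can also be confirmed by brute force, which makes the word-rearrangement picture concrete:

```python
import math
from itertools import permutations

# Enumerate all 9! orderings of the letters of "YYYYBBBBB" and deduplicate.
distinct = len(set(permutations("YYYYBBBBB")))

# Compare against the repeated-letters formula 9!/(4!5!).
formula = math.factorial(9) // (math.factorial(4) * math.factorial(5))
```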
|combinatorics|
0
How do proof verifiers work?
I'm currently trying to understand the concepts and theory behind some of the common proof verifiers out there, but am not quite sure on the exact nature and construction of the sort of systems/proof calculi they use. Are they essentially based on higher-order logics that use Henkin semantics, or is there something more to it? As I understand, extending Henkin semantics to higher-order logic does not render the formal system any less sound, though I am not too clear on that. Though I'm mainly looking for a general answer with useful examples, here are a few specific questions: What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative. Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher-order logics? Where does typed lambda calculus come into proof verification? Are there any other approaches than higher-order logic to proof verification?
I'll answer just part of your question: I think the other parts will become clearer based on this. A proof verifier is essentially a program that takes one argument, a proof representation, and checks that this is properly constructed; it says OK if it is, and otherwise either fails silently or highlights what is invalid. In principle, the proof representation could just be a sequence of formulae in a Hilbert system: all logics (at least, first-orderisable logics) can be represented in such a way. You don't even need to say which rule is specified at each step, since it is decidable whether any formula follows by a rule application from earlier formulae. In practice, though, the proof representations are more complex. Metamath is rather close to Hilbert systems, but has a rich set of rules. Coq and LF use (different) typed lambda calculi with definitions to represent the steps, which are computationally quite expensive to check (IIRC, both are PSPACE-hard). And the proof verifiers for such calculi are correspondingly more involved.
|logic|math-software|theorem-provers|
1
Are there any interesting semigroups that aren't monoids?
Are there any interesting and natural examples of semigroups that are not monoids (that is, they don't have an identity element)? To be a bit more precise, I guess I should ask if there are any interesting examples of semigroups $(X, \ast)$ for which there is not a monoid $(X, \ast, e)$ where $e$ is in $X$ . I don't consider an example like the set of real numbers greater than $10$ (considered under addition) to be a sufficiently 'natural' semigroup for my purposes; if the domain can be extended in an obvious way to include an identity element then that's not what I'm after.
Let $G$ be the set of (continuous) functions $f: \mathbb{R} \to \mathbb{R}$ where $f(x)$ tends to $0$ as $x$ tends to infinity: $\lim_{x\to \infty}f(x) = 0$. The operation $*$ is the usual point-wise multiplication of functions. $G$ is closed under $*$ since $\lim f(x) = 0$ and $\lim g(x) = 0$ imply $\lim f(x)*g(x) = 0$. $G$ is a subgroup of $\{ f: \mathbb{R} \to \mathbb{R} \}$, so the identity must be the same - the function which is constantly $1$. But this identity is not in $G$. EDIT: As Harry correctly points out, $\{ f: \mathbb{R} \to \mathbb{R} \}$ is not a group. Therefore the following correction is needed: consider only functions such that $f(x) \neq 0$ everywhere.
|big-list|abstract-algebra|semigroups|monoid|
0
What is the meaning of the double turnstile symbol ($\models$)?
What's the meaning of the double turnstile symbol $\models$ in logic and mathematical notation?
It is called a 'double turnstile': http://en.wikipedia.org/wiki/Double_turnstile .
|logic|notation|
0
What is the meaning of the double turnstile symbol ($\models$)?
What's the meaning of the double turnstile symbol $\models$ in logic and mathematical notation?
$\models$ is also known as the satisfaction relation. For a structure $\mathcal{M}=(M,I)$ and an $\mathcal{M}$-assignment $\nu$, $(\mathcal{M},\nu)\models \varphi$ means that the formula $\varphi$ is true with the particular assignment $\nu$. See http://www.trinity.edu/cbrown/topics_in_logic/struct/node2.html
|logic|notation|
0
Are there any interesting semigroups that aren't monoids?
Are there any interesting and natural examples of semigroups that are not monoids (that is, they don't have an identity element)? To be a bit more precise, I guess I should ask if there are any interesting examples of semigroups $(X, \ast)$ for which there is not a monoid $(X, \ast, e)$ where $e$ is in $X$ . I don't consider an example like the set of real numbers greater than $10$ (considered under addition) to be a sufficiently 'natural' semigroup for my purposes; if the domain can be extended in an obvious way to include an identity element then that's not what I'm after.
Finite sets of matrices of varying dimensions, where the product is $A*B=\{PQ \mid P \in A,\ Q \in B,\ \dim(Q)=\operatorname{codim}(P)\}$, and $\dim$ and $\operatorname{codim}$ are the dimensions of the source and target spaces of a matrix. The infinite case has an obvious unit.
|big-list|abstract-algebra|semigroups|monoid|
0
Good Physical Demonstrations of Abstract Mathematics
I like to use physical demonstrations when teaching mathematics (putting physics in the service of mathematics, for once, instead of the other way around), and it'd be great to get some more ideas to use. I'm looking for nontrivial ideas in abstract mathematics that can be demonstrated with some contraption, construction or physical intuition. For example, one can restate Euler's proof that $\sum \frac{1}{n^2} = \frac{\pi^2}{6}$ in terms of the flow of an incompressible fluid with sources at the integer points in the plane. Or, consider the problem of showing that, for a convex polyhedron whose $i^{th}$ face has area $A_i$ and outward facing normal vector $n_i$, $\sum A_i \cdot n_i = 0$. One can intuitively show this by pretending the polyhedron is filled with gas at uniform pressure. The force the gas exerts on the $i^{th}$ face is proportional to $A_i \cdot n_i$, with the same proportionality for every face. But the sum of all the forces must be zero; otherwise this polyhedron (considered as a rigid body) would spontaneously accelerate, which is absurd.
The book "The Mathematical Mechanic" by Mark Levi is a very good source of such examples, which Levi has been collecting for some time. The first two here are in the book, if I recall correctly.
|soft-question|big-list|physics|education|
0
Why $\sqrt{-1 \cdot {-1}} \neq \sqrt{-1}^2$?
I know there must be something unmathematical in the following but I don't know where it is: \begin{align} \sqrt{-1} &= i \\\\\ \frac1{\sqrt{-1}} &= \frac1i \\\\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\\\ \sqrt{\frac1{-1}} &= \frac1i \\\\ \sqrt{\frac{-1}1} &= \frac1i \\\\ \sqrt{-1} &= \frac1i \\\\ i &= \frac1i \\\\ i^2 &= 1 \\\\ -1 &= 1 \quad !!? \end{align}
Isaac's answer is correct, but it can be hard to see if you don't have a strong grasp of the relevant identities. These problems are generally easy to solve if you examine them line by line and simplify both sides. $$\begin{align*} \sqrt{-1} &= i & \mathrm{LHS}&=i, \mathrm{RHS}=i \\ 1/\sqrt{-1} &= 1/i & \mathrm{LHS}&=1/i=-i, \mathrm{RHS}=-i \\ \sqrt{1}/\sqrt{-1} &= 1/i & \mathrm{LHS}&=1/i=-i, \mathrm{RHS}=-i \\ \textstyle\sqrt{1/-1} &= 1/i & \mathrm{LHS}&=\sqrt{-1}=i, \mathrm{RHS}=-i \end{align*}$$ We can then see that the error must be in assuming $\textstyle\sqrt{1}/\sqrt{-1}=\sqrt{1/-1}$.
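The same line-by-line evaluation can be mirrored with Python's complex numbers, using the principal square root from `cmath` (a sketch, not part of the answer):

```python
import cmath

i = 1j
assert 1 / i == -i                            # 1/i simplifies to -i
assert cmath.sqrt(1) / cmath.sqrt(-1) == -i   # LHS of the third line: -i
assert cmath.sqrt(1 / -1) == i                # but sqrt(1/-1) = sqrt(-1) = +i
# So sqrt(1)/sqrt(-1) != sqrt(1/-1): the quotient identity breaks for b < 0.
```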
|algebra-precalculus|complex-numbers|arithmetic|fake-proofs|monodromy|
0
Books on Number Theory for Layman
Can you recommend books on number theory for anyone who loves mathematics -- from beginner to advanced, and suitable for someone with just a basic grasp of math?
A Classical Introduction to Modern Number Theory by Ireland and Rosen hands down!
|number-theory|reference-request|soft-question|book-recommendation|
1
Why is the volume of a sphere $\frac{4}{3}\pi r^3$?
I learned that the volume of a sphere is $\frac{4}{3}\pi r^3$, but why? The $\pi$ kind of makes sense because it's round like a circle, and the $r^3$ because it's 3-D, but $\frac{4}{3}$ is so random! How could somebody guess something like this for the formula?
Integration: summing the surface areas of concentric spherical shells from $0$ to $r$, $$\int_0^r 4\pi t^2\, dt = \frac{4}{3}\pi r^3.$$
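A quick numeric sanity check of this shell integral (a midpoint Riemann sum; not from the answer, and the function name `sphere_volume` is mine):

```python
import math

def sphere_volume(r, steps=100_000):
    """Approximate the integral of 4*pi*t^2 over [0, r] by thin shells."""
    dt = r / steps
    return sum(4 * math.pi * ((k + 0.5) * dt) ** 2 * dt for k in range(steps))
```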
|geometry|volume|solid-geometry|spheres|
0
Software for solving geometry questions
When I used to compete in Olympiad Competitions back in high school, a decent number of the easier geometry questions were solvable by what we called a geometry bash. Basically, you'd label every angle in the diagram with a variable, then use a limited set of basic geometry operations to find relations between the elements and eliminate equations, and eventually you'd get the result. It seems like the kind of thing you could program a computer to do. So, I'm curious, does there exist any software to do this? I know there is lots of software for solving equations, but is there anything that lets you actually input a geometry problem without manually converting to equations? I'm not looking for anything too advanced; even seeing just an attempt would be interesting. If there is anything decent, I think it'd be rather interesting to run the results on various competitions and see how many of the questions it solves.
This is just a special case of automated theorem proving. A nice thing is that some geometry problems can indeed be solved by an algorithm; there are theories showing that such a system can be realized algorithmically. I don't know if anyone has really written the program to do it, though JGEX seems to do that. A mechanical geometry proof technique was popularized in China by Jingzhong Zhang. He first introduced it as a way for machines to solve geometric problems relating the proportions between areas, lengths or angles. Then some Olympiad people I know started using it to bash that kind of problem. I don't know what the name is in English, but a literal translation of the method is "point removal method". Although it's not exactly the same as what you are talking about, because "input a geometry problem" requires you to provide the construction of the problem from a straight edge and a compass, which is almost like "manually converting to equations". The basic idea: construct the original problem by compass and straightedge.
|algebraic-geometry|logic|euclidean-geometry|math-software|quantifier-elimination|
1
Looking for a book similar to "Think of a Number"
Many years ago, I had read a book entitled "Think of a Number" by Malcolm E. Lines, and it was an eminently readable and thought provoking book. In the book, there were topics like Fibonacci numbers (along with the live examples from the nature) and Golden Section. Now I'm looking for a similar book. Can anyone recommend me one?
The Number Devil may be something like what you're looking for. Young Robert's dreams have taken a decided turn for the weird. Instead of falling down holes and such, he's visiting a bizarre magical land of number tricks with the number devil as his host. Starting at one and adding zero and all the rest of the numbers, Robert and the number devil use giant furry calculators, piles of coconuts, and endlessly scrolling paper to introduce basic concepts of numeracy, from interesting number sequences to exponents to matrices. (It's not a watered-down kids' book, even though the description might suggest it.)
|reference-request|soft-question|big-list|book-recommendation|
0
List of Interesting Math Blogs
I follow the one or other interesting math blog in my feed reader. It would be interesting to compile a list of math blogs that are interesting to read, and do not require research-level math skills. I'll start with my entries: Division By Zero Tanya Khovanova’s Math Blog
I find Annoying Precision to be wonderfully readable, with many, many interesting topics. Additionally, Rigorous Trivialities is a bit higher level, but has a really useful intro to Algebraic Geometry.
|soft-question|big-list|online-resources|
0
Can there be two distinct, continuous functions that are equal at all rationals?
Akhil showed that the cardinality of the set of continuous real functions is the same as the continuum, using as a step the observation that continuous functions that agree at rational points must agree everywhere, since the rationals are dense in the reals. This isn't an obvious step, so why is it true?
Without resorting to ε-δ arguments: Let $f$ and $g$ be continuous real functions with $f(x) = g(x)$ for all rational $x$. For any real number $c$ (in particular, an irrational $c$), there exists a sequence of rational numbers $(x_n)$ such that $\lim_{n \to \infty}x_{n}=c$. Since $f$ and $g$ are continuous, $\lim_{n \to \infty}f({x_{n}})=f({c})$ and $\lim_{n \to \infty}g({x_{n}})=g({c})$. Since each $x_n$ is rational, $f(x_n) = g(x_n)$ for all $n$, so the two limits must be equal, and hence $f(c) = g(c)$ for all real $c$.
|calculus|real-analysis|
1
Card doubling paradox
Suppose there are two face down cards each with a positive real number and with one twice the other. Each card has value equal to its number. You are given one of the cards (with value $x$) and after you have seen it, the dealer offers you an opportunity to swap without anyone having looked at the other card. If you choose to swap, your expected value should be the same, as you still have a $50\%$ chance of getting the higher card and $50\%$ of getting the lower card. However, the other card has a $50\%$ chance of being $0.5x$ and a $50\%$ chance of being $2x$. If we keep the card, our expected value is $x$, while if we swap it, then our expected value is: $$0.5(0.5x)+0.5(2x)=1.25x$$ so it seems like it is better to swap. Can anyone explain this apparent contradiction?
This paradox has always interested me. Something to think about is that there does not exist a uniform probability distribution over the positive real numbers (no distribution can assign equal likelihood across an unbounded range). In arriving at your paradox, it seems you are assuming that any value is equally likely, but this cannot be the case.
|paradoxes|probability-theory|
0
Conjectures that have been disproved with extremely large counterexamples?
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$ ). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$ . Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you
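A minimal version of the kind of brute-force check described above (the question's actual script isn't shown, so this is an assumed shape; the name `collatz_steps` is mine):

```python
def collatz_steps(n):
    """Steps for n to reach 1 under n -> n/2 (even), n -> 3n+1 (odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every n in a small range terminates; e.g. 27 famously takes 111 steps.
assert all(collatz_steps(n) < 300 for n in range(1, 10_000))
```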
The first example which came to my mind is Skewes' number, that is, the smallest natural number $n$ for which $\pi(n) > \operatorname{li}(n)$. Wikipedia states that the bound is now near $e^{727.952}$, but the first estimation was much higher.
|big-list|conjectures|big-numbers|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Okay, I burned a lot of reputation points (at least for me) on MathOverflow to gain clarity on how to give some intuition into this problem, so hopefully this answer will at least be somewhat illuminating. To gain a deeper understanding of what is going on, first we need to answer the question, "What is a number?" There are a lot of ways to define numbers, but in general numbers are thought of as symbols that represent sets. This is easy for things like natural numbers. So 10 would correspond to the set with ten things -- like a bag of ten stones. Pretty straightforward. The tricky part is that when we consider ten a subset of the real numbers, we actually redefine it. This is not emphasized even in higher mathematics classes, like real analysis; it just happens when we define real numbers. So what is 10 when constructed in the real numbers? Well, at least with the Dedekind cut version of the real numbers, all real numbers correspond to sets with an infinite number of elements.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Conjectures that have been disproved with extremely large counterexamples?
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$ ). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$ . Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you
I heard this story from Professor Estie Arkin at Stony Brook (sorry, I don't know what conjecture she was talking about): For weeks we tried to prove the conjecture (without success) while we left a computer running looking for counter-examples. One morning we came in to find the computer screen flashing: "Counter-example found". We all thought that there must have been a bug in the algorithm, but sure enough, it was a valid counter-example. I tell this story to my students to emphasize that "proof by lack of counter-example" is not a proof at all! [Edit] Here was the response from Estie: It is mentioned in our paper: Hamiltonian Triangulations for Fast Rendering, E.M. Arkin, M. Held, J.S.B. Mitchell, S.S. Skiena (1994). Algorithms -- ESA'94, Springer-Verlag, LNCS 855, J. van Leeuwen (ed.), pp. 36-47; Utrecht, The Netherlands, Sep 26-28, 1994. Specifically section 4 of the paper, which gives an example of a set of points that does not have a so-called "sequential triangulation".
|big-list|conjectures|big-numbers|
0
Proving that $1$- and $2D$ simple symmetric random walks return to the origin with probability $1$
How does one prove that a simple (steps of length $1$ in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in $1$ or $2$ dimensions returns to the origin with probability $1$ ? Edit : note that while returning to the origin is guaranteed $(p = 1)$ in $1$ and $2$ dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the $1$ - or $2D$ cases must fail to extend to $3D$ $($ or fail when the probability for each direction drops from $\frac{1}{4}$ to $\frac{1}{6})$ .
I'll do 1D. 1D walks are building binary strings: 010101, etc. Say we take six steps. Then 111111 is just as likely as 101010. However, how many of the possible sequences have six ones? Just one. How many of the possible sequences have three ones and three zeros? Many more. That number is called the multiplicity, and it grows mighty fast; in the limit its log becomes the Shannon entropy. Sequences are equally likely, but combinations are not. In the limit, the combinations with maximum entropy are going to dominate all the rest. So the walk will have taken an equal number of right and left steps... almost surely.
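The multiplicity argument above can be checked directly with binomial coefficients; a small Python illustration (not part of the original answer):

```python
from math import comb

n = 6
# Each 6-step walk is one of 2**6 equally likely binary strings,
# but the number of strings with k ones is C(6, k): the multiplicity.
for k in range(n + 1):
    print(k, comb(n, k))
# Only 1 string has six ones, while 20 have three ones and three zeros,
# so a balanced walk is 20 times as likely as the all-right walk.
```
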
|probability-theory|stochastic-processes|random-walk|symmetry|
0
Proving that $1$- and $2D$ simple symmetric random walks return to the origin with probability $1$
How does one prove that a simple (steps of length $1$ in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in $1$ or $2$ dimensions returns to the origin with probability $1$ ? Edit : note that while returning to the origin is guaranteed $(p = 1)$ in $1$ and $2$ dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the $1$ - or $2D$ cases must fail to extend to $3D$ $($ or fail when the probability for each direction drops from $\frac{1}{4}$ to $\frac{1}{6})$ .
I can prove the 1-dimensional case a bit more formally than Jonathan. First we only look at the absolute values. Let us try to calculate the probability of this never exceeding $x$. The probability of $2x+1$ consecutive moves all being the same is some $p>0$. If this ever occurs, then the absolute value will exceed $x$. Consider $n$ groups of $2x+1$ moves; the probability that at least one of these is all the same is $1-(1-p)^n$, which approaches $1$. So the probability of reaching each absolute value $x$ (other than $0$) is $1$. Now, let's consider the probability of reaching $0$ again. Without loss of generality, suppose our first move is $+1$. We have a $100\%$ chance of reaching a point a distance of $1$ from this. There is a $50\%$ chance that the first such point is $0$ and a $50\%$ chance that it is $2$. From $2$, we have a $100\%$ chance of reaching a point two away from this: a $50\%$ chance that this is $0$, $50\%$ that this is $4$. Repeating, it is easy to see that we have to reach $0$ again. Furthermore, after we have reached an absolute value x, t
|probability-theory|stochastic-processes|random-walk|symmetry|
0
What is a Markov Chain?
What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example.
A Markov chain is a discrete random process with the property that the next state depends only on the current state (Wikipedia), so $P(X_n \mid X_1, X_2, \dots, X_{n-1}) = P(X_n \mid X_{n-1})$. An example could be modelling the weather, under the assumption that today's weather can be predicted using only knowledge of yesterday's. Say the states are Rainy and Sunny. When it is Rainy on one day, the next day is Sunny with probability $0.3$. When it is Sunny, the probability of Rain the next day is $0.4$. Now, if today is Sunny, we can predict the weather of the day after tomorrow by calculating the probability of Rain tomorrow times the probability of Sun after Rain, plus the probability of Sun tomorrow times the probability of Sun after Sun. In total, the probability of Sun the day after tomorrow is $P(R|S) \cdot P(S|R) + P(S|S) \cdot P(S|S) = 0.4 \cdot 0.3 + 0.6 \cdot 0.6 = 0.48$.
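The two-step calculation above is just a matrix square; a short sketch using NumPy (the state ordering is my choice, not from the original answer):

```python
import numpy as np

# Transition matrix with states ordered (Sunny, Rainy); rows = today, columns = tomorrow.
P = np.array([[0.6, 0.4],   # Sunny -> Sunny 0.6, Sunny -> Rainy 0.4
              [0.3, 0.7]])  # Rainy -> Sunny 0.3, Rainy -> Rainy 0.7
P2 = P @ P  # two-step transition probabilities
print(P2[0, 0])  # P(Sunny in two days | Sunny today) = 0.6*0.6 + 0.4*0.3 = 0.48
```
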
|probability-theory|stochastic-processes|terminology|markov-chains|intuition|
1
What is a Markov Chain?
What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example.
In a nutshell, a Markov chain is (the behavior of) a random process which may find itself in any of a (not necessarily finite) number of different states. The process moves from one state to another at discrete times (that is, you define a sequence $S(t)$ of states at times $t=0,1,2,\dots$), and the probability of going from state $S$ to state $R$ depends only on $S$ and $R$; that is, there is no "memory of the past" and the process is "timeless". This means that a Markov chain with $n$ possible states may be modeled as an $n \times n$ matrix. An example of a process which may be modeled by a Markov chain is the sequence of faces of a die showing up, if at each step you are allowed to rotate the die about an edge (a quarter-turn brings one of the four adjacent faces to the top; opposite faces sum to $7$). The corresponding matrix is

        1    2    3    4    5    6
  1 |   0   1/4  1/4  1/4  1/4   0
  2 |  1/4   0   1/4  1/4   0   1/4
  3 |  1/4  1/4   0    0   1/4  1/4
  4 |  1/4  1/4   0    0   1/4  1/4
  5 |  1/4   0   1/4  1/4   0   1/4
  6 |   0   1/4  1/4  1/4  1/4   0

As usual, Wikipedia and MathWorld are your friends.
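As a sanity check, the die-rotation matrix can be rebuilt programmatically from the rule "a quarter-turn moves the top face to one of the four adjacent faces"; a Python sketch (my own illustration) confirming it is a valid, doubly stochastic transition matrix:

```python
import numpy as np

# Faces of a die, with opposite pairs (1,6), (2,5), (3,4).
opposite = {1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}
P = np.zeros((6, 6))
for i in range(1, 7):
    for j in range(1, 7):
        if j != i and j != opposite[i]:   # a quarter-turn moves the top face
            P[i - 1, j - 1] = 0.25        # to any of the four adjacent faces

print(P.sum(axis=1))          # every row sums to 1
print((np.ones(6) / 6) @ P)   # the uniform distribution is stationary
```
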
|probability-theory|stochastic-processes|terminology|markov-chains|intuition|
0
Why are the only associative division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?
Why are the only (associative) division algebras over the real numbers the real numbers, the complex numbers, and the quaternions? Here a division algebra is an associative algebra where every nonzero number is invertible (like a field, but without assuming commutativity of multiplication). This is an old result proved by Frobenius, but I can't remember how the argument goes. Anyone have a quick proof?
Essentially one first proves that any real division algebra $D$ is a Clifford algebra (i.e. it's generated by elements of some inner product vector space I subject to relations $v^2=\langle v, v\rangle$): first one splits $D$ as $\mathbb R\oplus D_0$ where $D_0$ is the space of elements with $Tr=0$ and then one observes that minimal polynomial of a traceless element has the form $x^2-a=0$ (it's quadratic because it's irreducible and the coefficient of $x$ is zero because it is the trace). Now it remains to find out which Clifford algebras are division algebras which is pretty straightforward (well, and it follows from the classification of Clifford algebras). This proof is written in Wikipedia.
|quaternions|ring-theory|abstract-algebra|
1
Intuitive understanding of the derivatives of $\sin x$ and $\cos x$
One of the first things ever taught in a differential calculus class: The derivative of $\sin x$ is $\cos x$. The derivative of $\cos x$ is $-\sin x$. This leads to a rather neat (and convenient?) chain of derivatives: sin(x) cos(x) -sin(x) -cos(x) sin(x) ... An analysis of the shape of their graphs confirms some points; for example, when $\sin x$ is at a maximum, $\cos x$ is zero and moving downwards; when $\cos x$ is at a maximum, $\sin x$ is zero and moving upwards. But these "matching points" only work for multiples of $\pi/4$. Let us move back towards the original definition(s) of sine and cosine: At the most basic level, $\sin x$ is defined as -- for a right triangle with internal angle $x$ -- the length of the side opposite of the angle divided by the hypotenuse of the triangle. To generalize this to the domain of all real numbers, $\sin x$ was then defined as the Y-coordinate of a point on the unit circle that is an angle $x$ from the positive X-axis. The definition of $\cos x$
From first principles, using trig identities and small-angle approximations: $$\sin'(x) = \lim\limits_{ h\to 0}\frac{\sin(x+h)-\sin(x)}{h}$$ $$\sin(x+h) = \sin(x)\cos(h)+\cos(x)\sin(h)$$ $$\Rightarrow \sin'(x) = \lim\limits_{ h\to 0}\frac{(\sin(x)(\cos(h)-1) + \cos(x)\sin(h))}{h}$$ For $x$ small, $\sin(x)\sim x$, so $$\lim\limits_{ h\to 0}\frac{\sin h}{h}=1$$ and $$\cos(x)\sim 1 -\frac {x^2} 2 $$ so $$\lim\limits_{ h\to 0}\frac{\cos h-1}{h}=0$$ $$ \sin'(x) = \cos(x)$$ $$\cos'(x) = \lim\limits_{ h\to 0}\frac{\cos(x+h)-\cos(x)}{h}$$ $$\cos(x+h) = \cos(x)\cos(h) - \sin(x)\sin(h)$$ $$\Rightarrow \cos'(x) = \lim\limits_{h\to0}\frac{\cos(x)(\cos(h)-1) - \sin(x)\sin(h)}{h}$$ $$= -\sin(x)$$ by the same reasoning as above.
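The limit computations above can be sanity-checked numerically with a central difference; a small Python sketch (the helper name and step size are arbitrary choices of mine):

```python
import math

def approx_derivative(f, x, h=1e-6):
    # central difference: (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# the numerical derivative of sin should track cos at every point
for x in [0.0, 0.5, 1.0, 2.0]:
    print(x, approx_derivative(math.sin, x), math.cos(x))
```
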
|calculus|trigonometry|
0
Explanation of method for showing that $\frac{0}{0}$ is undefined
(This was asked due to the comments and downvotes on this Stackoverflow answer. I am not that good at maths, so was wondering if I had made any basic mistakes) Ignoring limits, I would like to know if this is a valid explanation for why $\frac00$ is undefined: $x = \frac00$ $x \cdot 0 = 0$ Hence There are an infinite number of values for $x$ as anything multiplied by $0$ is $0$. However, it seems to have got comments, with two general themes. Once is that you lose the values of $x$ by multiplying by $0$. The other is that the last line is: $x \cdot 0 = \frac00 \cdot 0$ as it involves a division by $0$. Is there any merit to either argument? More to the point, are there any major flaws in my explanation and is there a better way of showing why $\frac00$ is undefined?
For "all" $x$, $$\frac0x = 0 \overset{?}{\implies} \frac00 = 0$$ For "all" $x$, $$\frac x x = 1 \overset{?}{\implies} \frac00 = 1$$ Moreover, if $\frac00$ were assigned any value $k$, we could then "prove" $2 = 3$: since $2 \cdot 0 = 3 \cdot 0$, dividing both sides by $0$ would give $2 = 3$. Since there is no reasonable value $\frac00$ can have, $\frac00$ must be undefined.
|algebra-precalculus|
0
Proving that $1$- and $2D$ simple symmetric random walks return to the origin with probability $1$
How does one prove that a simple (steps of length $1$ in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in $1$ or $2$ dimensions returns to the origin with probability $1$ ? Edit : note that while returning to the origin is guaranteed $(p = 1)$ in $1$ and $2$ dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the $1$ - or $2D$ cases must fail to extend to $3D$ $($ or fail when the probability for each direction drops from $\frac{1}{4}$ to $\frac{1}{6})$ .
I'll try it for 2D and then we can get 1D as a corollary [exercise!]... This is the only proof I know of; there may be a more intuitive (and less messy without tex!) proof out there, but I like this one - it uses generating functions in a really nifty way. Consider the probability of being at the origin after 2n steps (notice we cannot return in an odd number of steps): $u_0=1$ $u_{2n} = p(n,n,0,0)+ p(n-1,n-1,1,1)+...+p(n-k,n-k,k,k)+...+p(0,0,n,n)$ (when $n\neq0$) Here $p(u,d,l,r)$ is the probability of the first 2n steps being u up, d down, l left and r right in any order. Each order has probability $\frac{1}{4^{2n}}$, and there are $\frac{(2n)!}{u!d!l!r!}$ distinct orders, giving $p(n-k,n-k,k,k)=\frac{1}{4^{2n}} \frac{(2n)!}{(n-k)!(n-k)!k!k!}$ Now, since $(2n)!=n! n! \binom{2n}{n}$ we have $p(n-k,n-k,k,k)=\frac{1}{4^{2n}} \binom{2n}{n} \binom{n}{k}^2$ giving $u_{2n}= \frac{1}{4^{2n}} \Sigma_k \binom{2n}{n} \binom{n}{k}^2$ which, by one of those silly binomial results, can be contract
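Using the binomial identity $\sum_k \binom{n}{k}^2 = \binom{2n}{n}$, the return probabilities collapse to $u_{2n} = \frac{1}{4^{2n}}\binom{2n}{n}^2$; a quick Python check (not from the original answer) that the partial sums of $u_{2n}$ keep growing, which is the divergence recurrence needs:

```python
from math import comb

def u(two_n):
    """Probability that the 2D simple random walk is back at the origin after 2n steps."""
    n = two_n // 2
    return comb(2 * n, n) ** 2 / 4 ** (2 * n)

# u_{2n} ~ 1/(pi*n) by Stirling, so the partial sums grow like log(n)
# and diverge -- the Borel-Cantelli-style criterion behind recurrence.
partial = sum(u(2 * n) for n in range(1, 500))
print(partial)
```
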
|probability-theory|stochastic-processes|random-walk|symmetry|
0
Conjectures that have been disproved with extremely large counterexamples?
I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture. I'm sure that everyone here is familiar with it; it describes an operation on a natural number – $n/2$ if it is even, $3n+1$ if it is odd. The conjecture states that if this operation is repeated, all numbers will eventually wind up at $1$ (or rather, in an infinite loop of $1-4-2-1-4-2-1$ ). I fired up Python and ran a quick test on this for all numbers up to $5.76 \times 10^{18}$ (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at $1$ . Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.) I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?" To which I said, "No, you
For an old example, Mersenne made the following conjecture in 1644: the Mersenne numbers, $M_n=2^n - 1$, are prime for $n$ = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and for no other $n \le 257$. Pervushin showed that $M_{61}$ is prime, refuting the conjecture. $M_{61}$ is quite large by the standards of the day: 2 305 843 009 213 693 951. According to Wikipedia, there are 51 known Mersenne primes as of 2023.
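Today $M_{61}$ is easy to certify with the Lucas-Lehmer test; a minimal Python sketch (my own code, not from the answer):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for M_p = 2**p - 1, valid for odd prime p."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(61))  # True: M_61 = 2305843009213693951 is prime
print(lucas_lehmer(67))  # False: M_67 is composite, despite Mersenne's claim
```
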
|big-list|conjectures|big-numbers|
0
Mathematical subjects you wish you learned earlier
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier. Additionally I feel the same way about enumerative combinatorics. What are some less popular mathematical subjects that you think should be more popular?
Statistics. It is useful to me, but I learned it so late that I am still weak at it; I wish I had learned it earlier.
|soft-question|learning|
0
Classifying Quasi-coherent Sheaves on Projective Schemes
I know some references where I can find this, but they seem tedious. Both Hartshorne and Ueno cover this. I am wondering if there is an elegant way to describe these. If this task is too difficult in general, how about just $\mathbb{P}^n$ ? Thanks!
Quasi-coherent sheaves on affine schemes (say $Spec(A)$) are obtained by taking an $A$-module $M$ and the associated sheaf (by localizing $M$). This gives an equivalence of categories between $A$-modules and q-c sheaves on $Spec(A)$. Let $R$ be a graded ring, $R = R_0 + R_1 + \dots$ (direct sum). Then we can, given a graded $R$-module $M$, consider its associated sheaf $\tilde{M}$. The stalk of this at a homogeneous prime ideal $P$ is defined to be the localization $M_{(P)}$, which is defined as generated by quotients $m/s$ for $s$ homogeneous of the same degree as $m$ and not in $P$. In short, we get sheaves of modules on the affine scheme just as we get the normal sheaves of rings. We get sheaves of modules on the projective scheme in the same homogeneous localization way as we get the sheaf of rings. However, it's no longer an equivalence of categories. Why? Say you had a graded module $M= M_0 + M_1 + \dots$ (in general, we allow negative gradings as well). Then it is easy to check
|algebraic-geometry|projective-schemes|quasicoherent-sheaves|projective-space|
1
Proving that $1$- and $2D$ simple symmetric random walks return to the origin with probability $1$
How does one prove that a simple (steps of length $1$ in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in $1$ or $2$ dimensions returns to the origin with probability $1$ ? Edit : note that while returning to the origin is guaranteed $(p = 1)$ in $1$ and $2$ dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the $1$ - or $2D$ cases must fail to extend to $3D$ $($ or fail when the probability for each direction drops from $\frac{1}{4}$ to $\frac{1}{6})$ .
See Durrett, Probability: Theory and Examples (the link goes to an online copy of the fourth edition). On p. 164 Durrett gives a proof that simple random walk is recurrent in two dimensions. First find the probability that simple random walk in one dimension is at $0$ after $2n$ steps; this is clearly $\rho_1(2n) = \binom{2n}{n}/2^{2n}$, since $\binom{2n}{n}$ is the number of paths with $n$ right steps and $n$ left steps. Next, the probability that simple random walk in two dimensions -- call this $\rho_2(2n)$ -- is at $0$ after $2n$ steps is the square of the previous probability. Consider the simple random walk which makes steps to the northeast, northwest, southeast, and southwest with equal probability. The projections of this walk onto the x- and y-axes are independent simple random walks in one dimension. Rotating and rescaling gives the "usual" SRW in two dimensions (with steps north, east, south and west) and doesn't change the probability of being at $0$. So
|probability-theory|stochastic-processes|random-walk|symmetry|
0
Applications of the "soft maximum"
There is a little triviality that has been referred to as the "soft maximum" over on John Cook's Blog that I find to be fun, at the very least. The idea is this: given a list of values, say $x_1,x_2,\ldots,x_n$ , the function $g(x_1,x_2,\ldots,x_n) = \log(\exp(x_1) + \exp(x_2) + \cdots + \exp(x_n))$ returns a value very near the maximum in the list. This happens because that exponentiation exaggerates the differences between the $x_i$ values. For the largest $x_i$, $\exp(x_i)$ will be $really$ large. This largest exponential will significantly outweigh all of the others combined. Taking the logarithm, i.e. undoing the exponentiation, we essentially recover the largest of the $x_i$'s. (Of course, if two of the values were very near one another, we aren't guaranteed to get the true maximum, but it won't be far off!) About this, John Cook says: "The soft maximum approximates the hard maximum but it also rounds off the corners." This couldn't really be said any better. I recall trying to c
This is close to being the flipside of the geometric mean, which is the nth root of the product of the numbers, and can be expressed as the exponential of the mean of the logarithms. Another pair of dual mean measures is the regular mean and the harmonic mean (n divided by the sum of the reciprocals). I say the soft maximum is close to being the flipside of the geometric mean, but it lacks the good property that all of the others have of taking a list of copies of the same value to that value (for definiteness, let all values be positive). Let's call the "hyperbolic mean" the soft maximum of the nth roots of the terms in the list: then this has that good property. The hyperbolic mean emphasises large values in a roughly symmetric manner to the way that the geometric mean emphasises small values (the geometric mean is always smaller than the regular mean), and is, of course, much smaller for a long list of large values. So I say, consider it an amplified version of a useful addition to the family of mean o
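For reference, the usual numerically stable way to compute the soft maximum is to factor out the largest term before exponentiating; a short Python sketch (the function name is mine):

```python
import math

def soft_max(xs):
    """log(sum(exp(x))) computed stably by factoring out the largest term."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(soft_max([1.0, 2.0, 10.0]))   # close to 10, slightly above
print(soft_max([1000.0, 1001.0]))   # a naive exp(1000) would overflow
```
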
|numerical-methods|analysis|
1
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
From Mathematical Puzzles by Peter Winkler: Divide a hexagon into equilateral triangles, as in the figure. Now fill the whole hexagon with the three kinds of diamonds made from two triangles, also shown in the figure. Prove that the number of each kind of diamond is the same.
|soft-question|recreational-mathematics|puzzle|big-list|
0
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
unknown source: Could the plane be colored with two different colors (say, red and blue) so that there is no equilateral triangle whose vertices are all of the same color?
|soft-question|recreational-mathematics|puzzle|big-list|
0
What is the optimum angle of projection when throwing a stone off a cliff?
You are standing on a cliff at a height $h$ above the sea. You are capable of throwing a stone with velocity $v$ at any angle $a$ between horizontal and vertical. What is the value of $a$ when the horizontal distance travelled $d$ is at a maximum? On level ground, when $h$ is zero, it's easy to show that $a$ needs to be midway between horizontal and vertical, and thus $\large\frac{\pi}{4}$ or $45°$. As $h$ increases, however, we can see by heuristic reasoning that $a$ decreases to zero, because you can put more of the velocity into the horizontal component as the height of the cliff begins to make up for the loss in the vertical component. For small negative values of $h$ (throwing up onto a platform), $a$ will actually be greater than $45°$. Is there a fully-solved, closed-form expression for the value of $a$ when $h$ is not zero?
I don't have a complete solution, but I attempted to solve this problem using calculus. $x'=v \cos a$, $y''= -g$, and at $t=0$, $y'= v \sin a$. So $y'= v \sin a -gt$. Since $x_0=0$, $x=vt \cos a$. Integrating with respect to $t$, $y=vt \sin a - \frac12 gt^2+c$; substituting $y_0=h$ gives $y=vt \sin a - \frac12 gt^2+h$. The ball will hit the ground when $y=0$. This is as far as I got, but it appears that you can find a closed-form solution after all. I originally tried solving the quadratic for $t$ and subbing that into $x$, but it seems to work much better to do the substitution the other way round. I will leave this solution here in case anyone wants to see how to derive the basic equations for $x$ and $y$.
|calculus|trigonometry|physics|
0
What is the optimum angle of projection when throwing a stone off a cliff?
You are standing on a cliff at a height $h$ above the sea. You are capable of throwing a stone with velocity $v$ at any angle $a$ between horizontal and vertical. What is the value of $a$ when the horizontal distance travelled $d$ is at a maximum? On level ground, when $h$ is zero, it's easy to show that $a$ needs to be midway between horizontal and vertical, and thus $\large\frac{\pi}{4}$ or $45°$. As $h$ increases, however, we can see by heuristic reasoning that $a$ decreases to zero, because you can put more of the velocity into the horizontal component as the height of the cliff begins to make up for the loss in the vertical component. For small negative values of $h$ (throwing up onto a platform), $a$ will actually be greater than $45°$. Is there a fully-solved, closed-form expression for the value of $a$ when $h$ is not zero?
Assume no friction and uniform gravity $g$. If you throw a stone from the point $(0, h)$ with velocity $(v \cos\theta, v \sin\theta)$, then we get \begin{align} d &= vt\cos\theta && (1) \\ 0 &= h + vt\sin\theta - \frac12 gt^2 && (2) \end{align} The only unknown is $t$ (the total travel time). We can eliminate it by using $t = \frac d{v\cos\theta}$ to get $$ 0 = h + d\tan\theta - \frac{gd^2\sec^2\theta}{2v^2}\qquad(3) $$ Then we compute the total derivative with respect to $\theta$: \begin{align} 0 &= \frac d{d\theta}\left(d\tan\theta\right) - \frac g{2v^2}\frac d{d\theta}\left(d^2\sec^2\theta\right) \\ &= \ldots \end{align} and then set $\frac{dd}{d\theta}=0$ (since $d$ is at a maximum) to solve for $d$: $$ d = \frac{v^2}{g\tan\theta} $$ Substituting this back into (3) gives: \begin{align} h &= \frac{v^2}g \left( \frac1{2\sin^2\theta} - 1\right) \\ \Rightarrow \sin\theta &= \left( 2 \left(\frac{gh}{v^2} + 1\right) \right)^{-1/2} \end{align} This is the closed form of $\theta$ in terms of $h$.
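The closed form can be checked numerically; a Python sketch comparing it against a brute-force scan of angles (the values of g, v, h are arbitrary choices of mine):

```python
import math

g, v, h = 9.8, 20.0, 30.0

# Closed form from above: sin(theta) = (2*(g*h/v**2 + 1))**(-1/2)
theta_closed = math.asin((2 * (g * h / v ** 2 + 1)) ** -0.5)

def distance(theta):
    # horizontal range from height h: solve h + d*tan(theta) - g*d^2*sec^2(theta)/(2v^2) = 0
    a = g / (2 * v ** 2 * math.cos(theta) ** 2)
    b = math.tan(theta)
    return (b + math.sqrt(b * b + 4 * a * h)) / (2 * a)  # positive root of a*d^2 - b*d - h = 0

# crude scan over angles to find the maximizer numerically
theta_scan = max((distance(t), t) for t in
                 [i * math.pi / 2 / 10000 for i in range(1, 10000)])[1]
print(theta_closed, theta_scan)  # the two should agree closely
```
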
|calculus|trigonometry|physics|
1
Varying definitions of cohomology
So I know that given a chain complex we can define the $d$-th cohomology by taking $\ker{d}/\mathrm{im}_{d+1}$. But I don't know how this corresponds to the idea of holes in topological spaces (maybe this is homology, I'm a tad confused).
Edited to clear some things up: Simplicial and singular (co)homology were invented to detect holes in spaces. To get an intuitive idea of how this works, consider subspaces of the plane. Here the 2-chains are formal sums of things homeomorphic to the closed disk, and 1-chains are formal sums of things homeomorphic to a line segment. The operator d takes the boundary of a chain. For example, the boundary of the closed disk is a circle. If we take d of the circle we get $0$ since a circle has no boundary. And in general it happens that $d^2 = 0$ , that is boundaries always have no boundaries themselves. Now suppose we remove the origin from the plane and take a circle around the origin. This circle is in the kernel of d since it has no boundary. However, it does not bound any 2-chain in the space (since the origin is removed) and so it is not in the image of the boundary operator on two-dimensions. Thus the circle represents a non-trivial element in the quotient space $\ker( d ) / \opera
|homology-cohomology|algebraic-topology|
1
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
Most of us know that, being deterministic, computers cannot generate true random numbers . However, let's say you have a box which generates truly random binary numbers, but is biased: it's more likely to generate either a 1 or a 0 , but you don't know the exact probabilities, or even which is more likely (both probabilities are > 0 and sum to 1, obviously) Can you use this box to create an unbiased random generator of binary numbers?
|soft-question|recreational-mathematics|puzzle|big-list|
0
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
Assuming you have unlimited time and cash, is there a strategy that's guaranteed to win at roulette?
|soft-question|recreational-mathematics|puzzle|big-list|
0
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
Fork in the road 1: You're on a path on an island and come to a fork in the road. Both paths lead to villages of natives; the entire village either always tells the truth or always lies (both villages could be truth-telling or lying villages, or one of each). There are two natives at the fork - they could both be from the same village, or from different villages (so both could be truth-tellers, both liars, or one of each). One path leads to safety, the other to doom. You're allowed to ask only one question to each native to figure out which path is which. What do you ask?
|soft-question|recreational-mathematics|puzzle|big-list|
0
Paradox: increasing sequence that goes to $0$?
It is $10$ o'clock, and I have a box. Inside the box is a ball marked $1$. At $10$:$30$, I will remove the ball marked $1$, and add two balls, labeled $2$ and $3$. At $10$:$45$, I will remove the balls labeled $2$ and $3$, and add $4$ balls, marked $4$, $5$, $6$, and $7$. $7.5$ minutes before $11$, I will remove the balls labeled $4$, $5$, and $6$, and add $8$ balls, labeled $8$, $9$, $10$, $11$, $12$, $13$, $14$, and $15$. This pattern continues. Each time I reach the halfway point between my previous action and $11$ o'clock, I add some balls, and remove some other balls. Each time I remove one more ball than I removed last time, but add twice as many balls as I added last time. The result is that as it gets closer and closer to $11$, the number of balls in the box continues to increase. Yet every ball that I put in was eventually removed. So just how many balls will be in the box when the clock strikes $11$? $0$, or infinitely many? What's going on here?
I've always thought this sort of puzzle is a wonderful example of the difference between behaviour as you approach a limit, versus behaviour AT the limit. You can make the puzzle more revealing like this: Say I have an infinite number of banknotes, each with a unique serial number. I give you bill number 1. Now, you have two options, either you keep that bill and the game ends, or I give you 10 more banknotes, but you have to burn the lowest numbered bill. It seems obvious that option two is much better. Now, we repeat this game over and over. You keep getting more and more money. But at the limit, every banknote has been burned and you're worse off than you would have been had you just taken the one banknote and left.
|soft-question|paradoxes|
0
What functions can be represented as power series?
How do we know if a particular function can be represented as a power series? And once we have come up with a power series representation, how does one figure out its radius of convergence ?
This is a very general question, as one can create all sorts of series expansions for different functions (e.g. Taylor series, Laurent series, Fourier series). To give the obvious example of Taylor series: a power series representation of a function can be found if the function is infinitely differentiable in a neighbourhood of the given point. To find the radius of convergence, take the coefficients $a_n$ of the power series and apply the ratio test: the series converges when $\lim_{n\to\infty} \left|\frac{a_{n+1}x^{n+1}}{a_n x^n}\right| < 1$, so the radius of convergence is the value of $|x|$ at which this limit equals $1$.
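As a concrete instance of the ratio test on coefficients, here is a Python sketch (my own example, using the series $1/(1-x-x^2) = \sum F_{n+1}x^n$ with Fibonacci coefficients):

```python
# Ratio test on the Taylor coefficients of 1/(1 - x - x^2): they are Fibonacci numbers,
# so successive ratios tend to the golden ratio phi, giving radius of convergence 1/phi.
fib = [1, 1]
for _ in range(60):
    fib.append(fib[-1] + fib[-2])

ratio = fib[-1] / fib[-2]  # -> phi = (1 + sqrt(5))/2
print(1 / ratio)           # radius of convergence, approximately 0.618
```
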
|calculus|complex-analysis|sequences-and-series|
0
Has anyone ever proposed additional axioms?
According to Wikipedia, Godel's incompleteness theorem states: No consistent system of axioms whose theorems can be listed by an "effective procedure" (essentially, a computer program) is capable of proving all facts about the natural numbers. This obviously includes our current system. So has anyone proposed any additional axioms that seem credible?
Sure, the continuum hypothesis . We know, thanks to the work of Gödel and Paul Cohen, that this is a legitimate axiom, in that it doesn't destroy consistency: you can't disprove it (or prove it) using Zermelo–Fraenkel set theory. People do actually use the continuum hypothesis in model theory, for instance, where there are several results that depend on it. For an article about its uses, cf. the Tricki page .
|axioms|
0
What is the most efficient way to determine if a matrix is invertible?
I'm learning Linear Algebra using MIT's Open Courseware Course 18.06. Quite often, the professor says "... assuming that the matrix is invertible ...". Somewhere in the lecture he says that using a determinant on an $n \times n$ matrix is on the order of $O(n!)$ operations, where an operation is a multiplication and a subtraction. Is there a more efficient way? If the aim is to get the inverse, rather than just determine the invertibility, what is the most efficient way to do this?
Gauss–Jordan elimination can be used to determine when a matrix is invertible, and runs in polynomial (in fact, cubic) time. The same method (applying each row operation to the identity matrix as well) computes the inverse in polynomial time.
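A minimal sketch of that procedure (function and parameter names are mine, not from the course): run Gauss–Jordan on the augmented matrix $[A \mid I]$; if no usable pivot can be found the matrix is singular, otherwise the right half ends up holding $A^{-1}$, all in $O(n^3)$ arithmetic operations.

```python
def invert(A, eps=1e-12):
    """Gauss-Jordan elimination on the augmented matrix [A | I].
    Returns the inverse of A, or None if A is singular."""
    n = len(A)
    # Build the augmented matrix [A | I]
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < eps:
            return None  # no usable pivot => singular
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]       # scale the pivot row to 1
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # right half is A^{-1}

print(invert([[2, 1], [1, 1]]))   # an invertible matrix: returns its inverse
print(invert([[1, 2], [2, 4]]))   # None: the rows are dependent
```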
|linear-algebra|matrices|
1
How do you prove that a group specified by a presentation is infinite?
The group: $$ G = \left\langle x, y \; \left| \; x^2 = y^3 = (xy)^7 = 1\right. \right\rangle $$ is infinite, or so I've been told. How would I go about proving this? (To prove finiteness of a finitely presented group, I could do a coset enumeration, but I don't see how this helps if I want to prove that it's infinite.)
$\langle x,y \; | \; x^2=y^3=1 \rangle \cong \operatorname{PSL}_2(\mathbb Z)$, and this isomorphism identifies $G$ with the quotient of $\operatorname{PSL}_2(\mathbb Z)$ by the normal closure of $T^7$ (where $T:z\mapsto z+1$). The result is the symmetry group of the $(2,3,7)$ triangle tiling of the hyperbolic plane. From this description one can see that $G$ is infinite (e.g. because there are infinitely many triangles in the tiling and $G$ acts transitively on them).
|group-theory|group-presentation|geometric-group-theory|infinite-groups|combinatorial-group-theory|
1
Fermat's Two Square Theorem: How do you prove that an odd prime is the sum of two squares iff it is congruent to 1 mod 4?
It is a theorem in elementary number theory that if $p$ is a prime and congruent to 1 mod 4, then it is the sum of two squares. Apparently there is a trick involving arithmetic in the gaussian integers that lets you prove this quickly. Can anyone explain it?
Let $p$ be a prime congruent to 1 mod 4. Then to write $p = x^2 + y^2$ for $x,y$ integers is the same as writing $p = (x+iy)(x-iy) = N(x+iy)$ for $N$ the norm. It is well-known that the ring of Gaussian integers $\mathbb{Z}[i]$ is a principal ideal domain, even a euclidean domain. Now I claim that $p$ is not prime in $\mathbb{Z}[i]$. To determine how a prime $p$ of $\mathbb{Z}$ splits in $\mathbb{Z}[i]$ is equivalent to determining how the polynomial $X^2+1$ splits modulo $p$. First off, $-1$ is a quadratic residue modulo $p$ because $p \equiv 1 \mod 4$. Consequently, there is $t \in \mathbb{Z}$ with $t^2 \equiv -1 \mod p$, so $X^2+1$ splits modulo $p$, and $p$ does not remain prime in $\mathbb{Z}[i]$. (Another way of seeing this is to note that if $p$ remained prime, then we'd have $p \mid (t+i)(t-i)$, which would mean that $p \mid t+i$ or $p \mid t-i$; neither holds, since neither $\frac{t}{p} + \frac{i}{p}$ nor $\frac{t}{p} - \frac{i}{p}$ is a Gaussian integer.) Anyway, as a result there is a non-unit $x+iy$ of $\mathbb{Z}[i]$ that properly divides $p$. This means that the norms properly divide as well: $N(x+iy)$ properly divides $N(p) = p^2$, so $N(x+iy) = p$, i.e. $x^2 + y^2 = p$.
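The argument is effective: one can actually compute the decomposition. Below is a sketch of mine (all helper names are my own): find $t$ with $t^2\equiv-1 \pmod p$, then take a gcd in $\mathbb{Z}[i]$ using division with rounded quotients, which is exactly the euclidean-domain structure the answer invokes.

```python
def sqrt_minus_one(p):
    """Find t with t^2 = -1 (mod p), for a prime p with p = 1 (mod 4)."""
    for a in range(2, p):
        t = pow(a, (p - 1) // 4, p)
        if (t * t) % p == p - 1:
            return t
    raise ValueError("no square root of -1; is p = 1 mod 4?")

def g_mod(a, b):
    """Remainder of Gaussian-integer division a/b, rounding each coordinate."""
    (ar, ai), (br, bi) = a, b
    n = br * br + bi * bi
    # a/b = a * conj(b) / N(b); round the real and imaginary parts
    qr = round((ar * br + ai * bi) / n)
    qi = round((ai * br - ar * bi) / n)
    return (ar - (qr * br - qi * bi), ai - (qr * bi + qi * br))

def g_gcd(a, b):
    while b != (0, 0):
        a, b = b, g_mod(a, b)
    return a

def two_squares(p):
    t = sqrt_minus_one(p)
    x, y = g_gcd((p, 0), (t, 1))  # gcd(p, t + i) has norm p
    return abs(x), abs(y)

x, y = two_squares(13)
print(x, y, x * x + y * y)  # some x, y with x^2 + y^2 = 13
```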
|elementary-number-theory|prime-numbers|
0
What is the most efficient way to determine if a matrix is invertible?
I'm learning Linear Algebra using MIT's Open Courseware Course 18.06. Quite often, the professor says "... assuming that the matrix is invertible ...". Somewhere in the lecture he says that using a determinant on an $n \times n$ matrix is on the order of $O(n!)$ operations, where an operation is a multiplication and a subtraction. Is there a more efficient way? If the aim is to get the inverse, rather than just determine the invertibility, what is the most efficient way to do this?
A matrix is invertible iff its determinant is non-zero. There are algorithms which find the determinant in slightly worse than $O(n^2)$ time.
|linear-algebra|matrices|
0
Counting how many hands of cards use all four suits
From a standard $52$-card deck, how many ways are there to pick a hand of $k$ cards that includes one card from all four suits? I know that for any specific $k$, it's possible to break it up into cases based on the partitions of $k$ into $4$ parts. For example, if I want to choose a hand of six cards, I can break it up into two cases based on whether there are $(1)$ three cards from one suit and one card from each of the other three or $(2)$ two cards from each of two suits and one card from each of the other two. Is there a simpler, more general solution that doesn't require splitting the problem into many different cases?
Count the number of hands that do not contain at least one card from every suit and subtract from the total number of $k$-card hands. To count the number of hands that do not contain at least one card from every suit, use inclusion-exclusion considering what suits are not in a given hand. That is, letting $N(\dots)$ mean the number of hands meeting the given criteria, $$\begin{align} &N(\mathrm{no\ }\heartsuit)+N(\mathrm{no\ }\spadesuit)+N(\mathrm{no\ }\clubsuit)+N(\mathrm{no\ }\diamondsuit) \\ &\quad\quad-N(\mathrm{no\ }\heartsuit\spadesuit)-N(\mathrm{no\ }\heartsuit\clubsuit)-N(\mathrm{no\ }\heartsuit\diamondsuit)-N(\mathrm{no\ }\spadesuit\clubsuit)-N(\mathrm{no\ }\spadesuit\diamondsuit)-N(\mathrm{no\ }\clubsuit\diamondsuit) \\ &\quad\quad+N(\mathrm{no\ }\heartsuit\spadesuit\clubsuit)+N(\mathrm{no\ }\heartsuit\spadesuit\diamondsuit)+N(\mathrm{no\ }\heartsuit\clubsuit\diamondsuit)+N(\mathrm{no\ }\spadesuit\clubsuit\diamondsuit) \\ &\quad\quad-N(\mathrm{no\ }\heartsuit\spadesuit\clubsuit\diamondsuit). \end{align}$$ Since each term depends only on how many suits are excluded, this simplifies to $4\binom{39}{k}-6\binom{26}{k}+4\binom{13}{k}$ (the last term, with all four suits missing, is $0$ for $k \ge 1$), and the answer is $\binom{52}{k}$ minus that.
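Here is a quick sanity check of this inclusion-exclusion count (my own sketch, shrunk to a 12-card deck of 4 suits × 3 ranks so that brute force is instant):

```python
from itertools import combinations
from math import comb

SUITS, RANKS, K = 4, 3, 5
deck = [(s, r) for s in range(SUITS) for r in range(RANKS)]

# Brute force: hands of K cards containing all four suits
brute = sum(1 for hand in combinations(deck, K)
            if len({s for s, _ in hand}) == SUITS)

# Inclusion-exclusion: total minus hands missing at least one suit
n = SUITS * RANKS
missing = (comb(SUITS, 1) * comb(n - RANKS, K)
           - comb(SUITS, 2) * comb(n - 2 * RANKS, K)
           + comb(SUITS, 3) * comb(n - 3 * RANKS, K)
           - comb(SUITS, 4) * comb(n - 4 * RANKS, K))
formula = comb(n, K) - missing

print(brute, formula)  # the two counts agree
```

Replacing `RANKS` with 13 recovers the 52-card count for any $k$.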
|combinatorics|card-games|
1
Given enough time, what are the chances I can come out ahead in a coin toss contest?
Assuming I can play forever, what are my chances of coming out ahead in a coin flipping series? Let's say I want "heads"...then if I flip once, and get heads, then I win, because I've reached a point where I have more heads than tails (1-0). If it was tails, I can flip again. If I'm lucky, and I get two heads in a row after this, this is another way for me to win (2-1). Obviously, if I can play forever, my chances are probably pretty decent. They are at least greater than 50%, since I can get that from the first flip. After that, though, it starts getting sticky. I've drawn a tree graph to try to get to the point where I could start to see the formula hopefully dropping out, but so far it's eluding me. Your chances of coming out ahead after 1 flip are 50%. Fine. Assuming you don't win, you have to flip at least twice more. This step gives you 1 chance out of 4. The next level would be after 5 flips, where you have an additional 2 chances out of 12, followed by 7 flips, giving you 4 out of
100%, for the same reason as the 1-D random walk: a simple random walk on $\mathbb{Z}$ is recurrent, so with probability $1$ it eventually visits every state. In fact (again for the same reason), the probability is 100% that you eventually reach $X$ more heads than tails (or tails than heads), where $X$ is any non-negative integer.
|probability|
1
Circular permutations with indistinguishable objects
Given n distinct objects, there are $n!$ permutations of the objects and $n!/n$ "circular permutations" of the objects (orientation of the circle matters, but there is no starting point, so $1234$ and $2341$ are the same, but $4321$ is different). Given $n$ objects of $k$ types (where the objects within each type are indistinguishable), $r_i$ of the $i^{th}$ type, there are \begin{equation*} \frac{n!}{r_1!r_2!\cdots r_k!} \end{equation*} permutations. How many circular permutations are there of such a set?
This problem is best solved with Pólya's enumeration theorem, which follows from Burnside's lemma. See the first section of this Wikipedia article .
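For the record, Burnside/Pólya gives a closed form: the number of circular arrangements is $\frac1n\sum_{d}\varphi(d)\,\frac{(n/d)!}{(r_1/d)!\cdots(r_k/d)!}$, summed over the $d$ dividing every $r_i$ (and hence $n$). A small sketch (helper names mine):

```python
from math import factorial, gcd
from functools import reduce

def phi(m):
    """Euler's totient, by trial division."""
    result, x = m, m
    p = 2
    while p * p <= x:
        if x % p == 0:
            while x % p == 0:
                x //= p
            result -= result // p
        p += 1
    if x > 1:
        result -= result // x
    return result

def multinomial(counts):
    out = factorial(sum(counts))
    for c in counts:
        out //= factorial(c)
    return out

def circular_arrangements(counts):
    """Necklaces (rotations only, no reflections) of a multiset with the given type counts."""
    n = sum(counts)
    g = reduce(gcd, counts)   # only d dividing every count can contribute
    total = sum(phi(d) * multinomial([c // d for c in counts])
                for d in range(1, g + 1) if g % d == 0)
    return total // n

print(circular_arrangements([2, 2]))        # AABB on an oriented cycle: 2 ways
print(circular_arrangements([1, 1, 1, 1]))  # 4 distinct objects: 3! = 6 ways
```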
|combinatorics|
0
Which books would you recommend about Recreational Mathematics?
By this I mean books with math puzzles and problems similar to the ones you would find in mathematical olympiads.
If you're after Olympiad-level books, get The IMO Compendium which is a collection of problems from the International Math Olympiad, 1959-2004. You can find similar books with national Olympiad problems by going to Amazon and searching for "mathematical Olympiad". Two books that offer collections of techniques useful for olympiad-level contests are Paul Zeitz's The Art and Craft of Problem Solving and Arthur Engel's Problem Solving Strategies . There are lots of other books with similar titles and descriptions. Just follow Google Books's suggestions.
|soft-question|big-list|reference-request|recreational-mathematics|
1
What are some applications outside of mathematics for algebraic geometry?
Are there any results from algebraic geometry that have led to an interesting "real world" application?
Broadly speaking, algebraic geometry is used a lot in some areas of robotics and mechanical engineering. Real algebraic geometry, for example, is important to the development of CAD systems (think NURBS, computing intersections of primitives, etc.). And AG comes up in robotics when it is important to figure out, say, what motions a robotic arm in a given configuration is capable of, or to construct some kind of linkage that draws a prescribed curve. Something specific in that vein: Kempe's Universality Theorem gives that any bounded algebraic curve in $\mathbb{R}^2$ is the locus of some linkage. The "locus of a linkage" is the path drawn out by all the vertices of a graph, where the edge lengths are all specified and one or more vertices remains still. Interestingly, Kempe's original proof of the theorem was flawed, and more recent proofs have been more involved. However, Timothy Abbott's MIT masters thesis gives a simpler proof that yields a working linkage for a given curve, and makes the construction explicit.
|algebraic-geometry|
0
What are some applications outside of mathematics for algebraic geometry?
Are there any results from algebraic geometry that have led to an interesting "real world" application?
The following slideshow gives an explanation of how algebraic geometry can be used in phylogenetics. See also this post of Charles Siegel on Rigorous Trivialities. This is not an area I've looked at in much detail at all, but it appears that the idea is to use a graph to model evolutionary processes, and such that the "transition function" for these processes is given by a polynomial map. In particular, it'd be of interest to look at the potential outcomes, namely the image of the transition function; that corresponds to the image of a polynomial map (which is not necessarily an algebraic variety, but it is a constructible set, so not that badly behaved either). (In practice, though, it seems that one studies the closure, which is a legitimate algebraic set.)
|algebraic-geometry|
1
How to determine annual payments on a partially repaid loan?
Question : A $10$ -year loan of $\$500$ is repaid with payments at the end of each year. The lender charges interest at an annual effective rate of $10\%$ . Each of the first ten payments is $150\%$ of the amount of interest due. Each of the last ten payments is $X$ . Calculate $X$ . My Attempt $\$ 500$ will earn $\$50$ interest each year, so each of the first $10$ payments must be $\$75$ . Then after $10$ years, a total of $750$ has been repaid. In $10$ years, I can find the accumulated debt by saying PV=500 I/Y=10 N=10 giving me FV= $\$1296.87$ . So the balance would be $\$ 1296.87-\$ 750=\$ 546.87$ . Now I am stuck! How do I find out what the last payments should be? I know I can't just divide $\$ 546.87/10=\$ 54.687$ , because the lender is still charging interest while the borrower pays off this remaining debt, so there would still be the interest left over. This situation isn't mentioned anywhere in my calculator manual! Can one of you give me some explanation about what is going on?
This problem is in two stages. For the first stage, notice that you are paying 150% of the interest, but ending up owing more. This is because you subtracted $\$ $750 from the future value, when in fact each $\$ $75 amount was paid at a time in the past and needs to be converted to a future value too. The payment in the $n$th year has a future value of $75\times(1.1)^{10-n}$. The total future value of the repayments is: $$75(1.1^9)+75(1.1^8)+\cdots+75(1.1^0).$$ Note that I have assumed that the interest is charged before the repayments are made. This sequence is a geometric progression . We consider it as a geometric sequence in reverse to make the maths easier. It has first term $a=75$, common ratio $r=1.1$ (each term is $1.1$ times the previous), and $n=10$ terms. The sum is given by the formula: $$\frac{a(r^{n}-1)}{r-1}=\frac{75(1.1^{10}-1)}{0.1}\approx\$1195.31.$$ After we have solved this first part, it is just a standard interest-with-repayments problem.
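To finish the second stage under the same assumptions as above (fixed \$75 payments, interest charged before payments; this is my sketch, not part of the original answer): the outstanding balance at year 10 is the loan's future value minus the payments' future value, and $X$ is the level payment that amortizes that balance over the remaining 10 years.

```python
r, n = 0.10, 10

fv_loan = 500 * (1 + r) ** n                 # 500 accumulated for 10 years
fv_payments = 75 * ((1 + r) ** n - 1) / r    # future value of the ten 75s
balance = fv_loan - fv_payments              # still owed at year 10

# Level payment X amortizing `balance` over 10 more years at 10%:
annuity = (1 - (1 + r) ** -n) / r
X = balance / annuity
print(round(balance, 2), round(X, 2))
```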
|finance|calculator|
1
Given enough time, what are the chances I can come out ahead in a coin toss contest?
Assuming I can play forever, what are my chances of coming out ahead in a coin flipping series? Let's say I want "heads"...then if I flip once, and get heads, then I win, because I've reached a point where I have more heads than tails (1-0). If it was tails, I can flip again. If I'm lucky, and I get two heads in a row after this, this is another way for me to win (2-1). Obviously, if I can play forever, my chances are probably pretty decent. They are at least greater than 50%, since I can get that from the first flip. After that, though, it starts getting sticky. I've drawn a tree graph to try to get to the point where I could start to see the formula hopefully dropping out, but so far it's eluding me. Your chances of coming out ahead after 1 flip are 50%. Fine. Assuming you don't win, you have to flip at least twice more. This step gives you 1 chance out of 4. The next level would be after 5 flips, where you have an additional 2 chances out of 12, followed by 7 flips, giving you 4 out of
The question can be answered by counting lattice paths (closely related to Catalan and ballot numbers). Let $C_n$ denote the number of sequences of $2n$ coin tosses in which you are never ahead. Formally, we count sequences in which every prefix has no fewer $T$'s than $H$'s. We call this property $A$. The number of total sequences of length $2n$ is $2^{2n}$. We then show that as $n\to\infty$, the ratio $C_n / 2^{2n}$ tends to $0$. This means that in almost every sequence you will eventually be ahead (the chances of a random sequence having property $A$ tend to $0$ as the sequence gets longer). Indeed, a reflection argument shows $C_n = \binom{2n}{n}$, so $$\frac{C_n}{2^{2n}} = \frac{\binom{2n}{n}}{4^n} \sim \frac{1}{\sqrt{\pi n}}$$ by Stirling's approximation, and this tends to $0$.
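A brute-force check (my own sketch) that the fraction of never-ahead sequences shrinks as the sequences get longer:

```python
from itertools import product

def never_ahead_count(n):
    """Count sequences of 2n tosses where every prefix has #H <= #T."""
    count = 0
    for seq in product("HT", repeat=2 * n):
        lead = 0  # tails minus heads so far
        ok = True
        for c in seq:
            lead += 1 if c == "T" else -1
            if lead < 0:
                ok = False
                break
        count += ok
    return count

ratios = [never_ahead_count(n) / 4 ** n for n in range(1, 7)]
print(ratios)  # strictly decreasing toward 0
```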
|probability|
0
Repayments of a loan with compound interest
Suppose I have a loan of M dollars. At the end of each year, I am charged interest at rate R and make a repayment of P. The loan is repaid after n years. How long (n) does it take to repay the loan if I am given the other variables? How much are the repayments of P if I am given the other variable? Suppose that the payments were at the start of the year. How would this change the problem?
Suggest you look into amortization calculators.
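Such calculators are just the standard annuity formulas. A sketch of the two closed forms the question asks for, assuming end-of-year payments (for start-of-year payments, each payment earns one extra year of interest, so divide $M$ by $(1+R)$ before applying the same formulas); the function names are my own:

```python
from math import log

def payment(M, R, n):
    """Level end-of-year payment P that repays a loan M at rate R in n years."""
    return M * R / (1 - (1 + R) ** -n)

def years_to_repay(M, R, P):
    """Number of years to repay M at rate R with end-of-year payments P.
    Requires P > M*R, otherwise the balance never shrinks."""
    return -log(1 - M * R / P) / log(1 + R)

P = payment(1000, 0.05, 10)
print(round(P, 2))                               # the level payment
print(round(years_to_repay(1000, 0.05, P), 2))   # round trip: ~10.0 years
```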
|finance|
0
Why is the derivative of a circle's area its perimeter (and similarly for spheres)?
When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle. Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$. Is this just a coincidence, or is there some deep explanation for why we should expect this?
Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an annulus (or ring) with inner radius $r$ and outer radius $r+dr$, i.e. with inner circumference $2 \pi r$ and outer circumference $2\pi(r+dr)$. As this ring is extremely thin, we can imagine cutting the ring and then flattening it out to form a rectangle with width $2\pi r$ and height $dr$ (the side of length $2\pi(r+dr)$ is close enough to $2\pi r$ that we can ignore the difference). So the area gain is $2\pi r\cdot dr$, and to determine the rate of change with respect to $r$, we divide by $dr$, giving $2\pi r$. Please note that this is just an informal, intuitive explanation as opposed to a formal proof. The same reasoning works with a sphere: we flatten a thin spherical shell out into a rectangular prism instead.
|calculus|geometry|derivatives|circles|area|
0
Balance chemical equations without trial and error?
In my AP chemistry class, I often have to balance chemical equations like the following: $$ \mathrm{Al} + \text O_2 \to \mathrm{Al}_2 \mathrm O_3 $$ The goal is to make both side of the arrow have the same amount of atoms by adding compounds in the equation to each side. A solution: $$ 4 \mathrm{Al} + 3 \mathrm{ O_2} \to 2 \mathrm{Al}_2 \mathrm{ O_3} $$ When the subscripts become really large, or there are a lot of atoms involved, trial and error is impossible unless performed by a computer. What if some chemical equation can not be balanced? (Do such equations exist?) I tried one for a long time only to realize the problem was wrong. My teacher said trial and error is the only way. Are there other methods?
Yes; it's possible to write a system of equations that can be solved to find the correct coefficients. Here's an example for the given formula. We're trying to find coefficients A, B, and C such that $A (\mathrm{Al}) + B (\mathrm{O_2}) \rightarrow C (\mathrm{Al_2 O_3}).$ In order to do this, we can write an equation for each element based on how many atoms are on each side of the equation. For Al: $A = 2C$; for O: $2B = 3C$. This is an uninteresting example, but these will always be linear equations in terms of the coefficients. Note that we have fewer equations than variables. This means that there's more than one way to correctly balance the equation (and there is, because any set of coefficients can be scaled by any factor). We just need to find one integral solution to these equations. To solve, we can arbitrarily set one of the variables to 1 and we'll get a solution with (probably fractional) coefficients. If we set $A=1$, the solution is $(A,B,C) = (1,\frac{3}{4},\frac{1}{2})$. To clear the fractions, multiply through by the least common denominator, $4$, giving the integer solution $(A,B,C) = (4,3,2)$, which matches the balanced equation above.
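The recipe above can be carried out mechanically with exact rational arithmetic (a toy sketch of mine for this one equation; a general balancer would build the matrix from the parsed formulas):

```python
from fractions import Fraction
from math import lcm

# A*Al + B*O2 -> C*(Al2 O3); set A = 1 and solve the element equations:
#   Al: A = 2C        O: 2B = 3C
A = Fraction(1)
C = A / 2              # from the aluminium equation
B = 3 * C / 2          # from the oxygen equation

# Scale up to the smallest integer solution
m = lcm(A.denominator, B.denominator, C.denominator)
coeffs = (int(A * m), int(B * m), int(C * m))
print(coeffs)  # (4, 3, 2)
```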
|linear-algebra|systems-of-equations|chemistry|
1
Why is the volume of a cone one third of the volume of a cylinder?
The volume of a cone with height $h$ and radius $r$ is $\frac{1}{3} \pi r^2 h$ , which is exactly one third the volume of the smallest cylinder that it fits inside. This can be proved easily by considering a cone as a solid of revolution , but I would like to know if it can be proved or at least visual demonstrated without using calculus.
You can use Pappus's centroid theorem as in my answer here , but it does not provide much insight. If instead of a cylinder and a cone, you consider a cube and a square-based pyramid where the "top" vertex of the pyramid (the one opposite the square base) is shifted to be directly above one vertex of the base, you can fit three such pyramids together to form the complete cube. (I've seen this as physical toy/puzzle with three pyramidal pieces and a cubic container.) This may give some insight into the 1/3 "pointy thing rule" (for pointy things with similar, linearly-related cross-sections) that Katie Banks discussed in her comment.
|geometry|3d|volume|solid-geometry|
0
What transformations of the plane are geometrically constructable (compass & straight edge)?
Congruence transformations (isometries) and similarity transformations (isometries + dilations) should be constructable. What about other affine transformations? Other conformal mappings? edit : by constructable, I mean given the defining information for the transformation in a geometric way (e.g. a dilation requires a center and a ratio, so the given could be a point and two segments), can you construct the image of a point under the transformation from its preimage?
edit (2010-07-26) : The question is much more involved than I'd originally thought. As implied in the question, I knew that congruence and similarity transformations are constructible. Immediately below this section is my original answer, which only demonstrates congruence transformations and was intended more to give an idea of what an answer might look like (since, at the time, there was another answer that was not particularly helpful). In the last section of this answer is my justification that all affine transformations of the plane are constructible. In re-reading that now, I realize that I'd assumed the ability to construct a point, say $P'$, on a line, say $\overleftrightarrow{RP}$, such that $\frac{RP'}{RP}$ is equal to some known ratio. This is equivalent to being able to construct the dilation of $P$ by the known ratio about center $R$. I've added the construction of such a dilation below the congruence transformation section. edit (2012-01-28) : A conversation with some col
|geometry|euclidean-geometry|geometric-construction|transformational-geometry|
1
Why is the volume of a cone one third of the volume of a cylinder?
The volume of a cone with height $h$ and radius $r$ is $\frac{1}{3} \pi r^2 h$ , which is exactly one third the volume of the smallest cylinder that it fits inside. This can be proved easily by considering a cone as a solid of revolution , but I would like to know if it can be proved or at least visual demonstrated without using calculus.
One can cut a cube into $3$ pyramids with square bases -- so for such pyramids the volume is indeed $\frac13 hS$. And then one uses Cavalieri's principle to prove that the volume of any cone is $\frac13 hS$.
|geometry|3d|volume|solid-geometry|
0
Why are differentiable complex functions infinitely differentiable?
When I studied complex analysis, I could never understand how once-differentiable complex functions could be possibly be infinitely differentiable. After all, this doesn't hold for functions from $\mathbb R ^2$ to $\mathbb R ^2$. Can anyone explain what is different about complex numbers?
The proofs I have seen derive this as a corollary of Cauchy's integral formula . Look at the difference quotient as an integral, play around with it, and you get that it converges to what you'd get if you differentiated under the integral sign. Note that since harmonic functions also satisfy a similar integral equation, they are also infinitely differentiable in the same way (this also follows since they are real and imaginary parts of holomorphic functions).
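One can even see the formula numerically (my own sketch, not from the original answer): approximate $f^{(n)}(0)=\frac{n!}{2\pi i}\oint_{|z|=1} \frac{f(z)}{z^{n+1}}\,dz$ for $f=\exp$ by summing over equally spaced points of the unit circle. Every derivative, of every order, comes out of the same single integral formula, which is the heart of the phenomenon.

```python
import cmath
from math import factorial, pi

def nth_derivative_at_0(f, n, samples=2000):
    """Approximate f^(n)(0) via Cauchy's integral formula on the unit circle."""
    total = 0j
    for k in range(samples):
        theta = 2 * pi * k / samples
        z = cmath.exp(1j * theta)
        total += f(z) / z ** (n + 1) * (1j * z)  # integrand f(z)/z^{n+1}, with dz = iz dtheta
    total *= 2 * pi / samples
    return factorial(n) / (2j * pi) * total

# exp has every derivative equal to 1 at 0
for n in range(5):
    print(n, nth_derivative_at_0(cmath.exp, n).real)
```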
|complex-analysis|
0
Importance of Representation Theory
Representation theory is a subject I want to like (it can be fun finding the representations of a group), but it's hard for me to see it as a subject that arises naturally or why it is important. I can think of two mathematical reasons for studying it: The character table of a group is packs a lot of information about the group and is concise. It is practically/computationally nice to have explicit matrices that model a group. But there must certainly be deeper things that I am missing. I can understand why one would want to study group actions (the axioms for a group beg you to think of elements as operators), but why look at group actions on vector spaces? Is it because linear algebra is so easy/well-known (when compared to just modules, say)? I am also told that representation theory is important in quantum mechanics. For example, physics should be $\mathrm{SO}(3)$ invariant and when we represent this on a Hilbert space of wave-functions, we are led to information about angular mome
Particles correspond to specific vectors in a representation, not to $G$ -orbits! The reason has to do with "symmetry breaking." The $8$ particles in the meson octet correspond to a basis of a certain $8$ -dimensional representation of the group $\mathrm{SU}(3)$ called the "adjoint representation." At high enough energies these particles would be indistinguishable. But at low energies the " $\mathrm{SU}(3)$ symmetry has been broken" and the particles become distinguishable. Another good physics example that's easier to understand is that the orbital states of electrons in atoms correspond to representations of the group $\mathrm{SO}(3)$ of symmetries of space (well, really $\mathrm{SU}(2)$ if you want to incorporate spin). Try reading a standard quantum mechanics textbook for a little bit of this picture and then try thinking about it in terms of representation theory.
|representation-theory|physics|
0
Why is the volume of a cone one third of the volume of a cylinder?
The volume of a cone with height $h$ and radius $r$ is $\frac{1}{3} \pi r^2 h$ , which is exactly one third the volume of the smallest cylinder that it fits inside. This can be proved easily by considering a cone as a solid of revolution , but I would like to know if it can be proved or at least visual demonstrated without using calculus.
A visual demonstration for the case of a pyramid with a square base. As Grigory states , Cavalieri's principle can be used to get the formula for the volume of a cone. We just need the base of the square pyramid to have side length $ r\sqrt\pi$ . Such a pyramid has volume $\frac13 \cdot h \cdot \pi \cdot r^2. $ Then the area of the base is clearly the same. The cross-sectional area at distance a from the peak is a simple matter of similar triangles: The radius of the cone's cross section will be $a/h \times r$ . The side length of the square pyramid's cross section will be $\frac ah \cdot r\sqrt\pi.$ Once again, we see that the areas must be equal. So by Cavalieri's principle, the cone and square pyramid must have the same volume: $ \frac13\cdot h \cdot \pi \cdot r^2$
|geometry|3d|volume|solid-geometry|
1
How is prisoner's dilemma different from chicken?
Chicken is a famous game where two people drive on a collision course straight towards each other. Whoever swerves is considered a 'chicken' and loses, but if nobody swerves, they will both crash. So the payoff matrix looks something like this:

             B swerves          B straight
A swerves    tie                A loses, B wins
A straight   B loses, A wins    both lose

But I have heard of another situation called the prisoner's dilemma, where two prisoners are each given the choice to testify against the other, or remain silent. The payoff matrix for prisoner's dilemma also looks like:

             B silent           B testify
A silent     tie                A loses, B wins
A testify    B loses, A wins    both lose

I remember hearing that in the prisoner's dilemma, it was always best for both prisoners to testify. But that makes no sense if you try to apply it to chicken: both drivers would crash every time, and in real life, almost always someone ends up swerving. What's the difference between the two situations?
The games aren't just about winning or losing, but also about utility. Here is a more accurate table for chicken:

             B swerves             B straight
A swerves    No gain for either    A loses, B wins
A straight   B loses, A wins       Both have large loss

Here is one for the prisoner's dilemma:

             B silent                        B testify
A silent     Both have small loss            A large loss, B loses nothing
A testify    B large loss, A loses nothing   Both have medium loss

In the prisoner's dilemma, an individual prisoner will always do better by testifying (look at the table); however, by both testifying they end up in a worse position than if both had stayed silent. In contrast, in chicken, going straight is better if the other swerves, and swerving is better if the other goes straight. Your tables represent some strategies as being equally good for players when they are not.
|game-theory|
0
How is prisoner's dilemma different from chicken?
Chicken is a famous game where two people drive on a collision course straight towards each other. Whoever swerves is considered a 'chicken' and loses, but if nobody swerves, they will both crash. So the payoff matrix looks something like this:

             B swerves          B straight
A swerves    tie                A loses, B wins
A straight   B loses, A wins    both lose

But I have heard of another situation called the prisoner's dilemma, where two prisoners are each given the choice to testify against the other, or remain silent. The payoff matrix for prisoner's dilemma also looks like:

             B silent           B testify
A silent     tie                A loses, B wins
A testify    B loses, A wins    both lose

I remember hearing that in the prisoner's dilemma, it was always best for both prisoners to testify. But that makes no sense if you try to apply it to chicken: both drivers would crash every time, and in real life, almost always someone ends up swerving. What's the difference between the two situations?
(See http://en.wikipedia.org/wiki/Chicken_%28game%29#Prisoner.27s_dilemma .) The difference is in the payoffs. In the "chicken" game , the payoff matrix is:

        Sw        St
Sw     0, 0     -1, +1
St    +1, -1   -10, -10

While in the PD game:

        Si     Te
Si     3, 3   0, 5
Te     5, 0   1, 1

Both games have this structure in the payoff table:

       C       D
C     R, R    S, T
D     T, S    P, P

But: In the PD game, the order is T > R > P > S. In the Chicken game, the order is T > R > S > P. This leads to different Nash equilibria. In the PD game, if A remains silent, B chooses to testify because T > R, while if A testifies, B should also testify because P > S. So testifying is B's most rational choice after considering all possibilities. But in the Chicken game, as S > P, if A goes straight, B should swerve. This leads to two Nash equilibria in pure strategies: (St, Sw) and (Sw, St).
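The two equilibrium structures can be checked mechanically (a sketch of mine, using the payoff numbers above): a pure-strategy profile is a Nash equilibrium when neither player can gain by deviating unilaterally.

```python
def pure_nash(payoffs):
    """payoffs[i][j] = (payoff to A, payoff to B) for A's move i, B's move j."""
    rows, cols = len(payoffs), len(payoffs[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            a, b = payoffs[i][j]
            # No unilateral deviation by A (change i) or B (change j) improves their payoff
            if (all(payoffs[i2][j][0] <= a for i2 in range(rows)) and
                    all(payoffs[i][j2][1] <= b for j2 in range(cols))):
                eq.append((i, j))
    return eq

chicken = [[(0, 0), (-1, 1)],
           [(1, -1), (-10, -10)]]   # rows/cols: swerve, straight
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]            # rows/cols: silent, testify

print(pure_nash(chicken))  # [(0, 1), (1, 0)]: one swerves, the other goes straight
print(pure_nash(pd))       # [(1, 1)]: both testify
```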
|game-theory|
1
How can there be explicit polynomial equations for which the existence of integer solutions is unprovable?
This answer suggests that there are explicit polynomial equations for which the existence (or nonexistence) of integer solutions is unprovable. How can this be?
The answer to this question depends on how the problem is defined, but the answer is no, at least without defining the problem in a misleading way. Consider a polynomial $p$. If it has an integer solution, then the solution will eventually be found by random guessing. So if it is impossible to prove the existential status, there must be no solution. Now we know from this link that there is a polynomial $q$ that is unsolvable in the integers iff ZFC is consistent. It is well known that ZFC cannot prove its own consistency. So if ZFC is consistent, then $q$ is unsolvable, but we cannot prove this within ZFC, as then we could prove ZFC's consistency. So it seems accurate to say that if mathematics is consistent, we have a polynomial with no integer roots, but we can't prove it. However, if we are assuming maths is consistent, we can use this assumption to prove that the equation is unsolvable (indeed that is what we have just done). So it really isn't an accurate statement at all. To further clarify, when consid
|logic|diophantine-equations|
0
What are Your Favourite Maths Puzzles?
We all love a good puzzle To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in latin squares ). Mathematicians and puzzles get on, it seems, rather well. But what is a good puzzle? Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must Not be widely known: If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw
A probability problem I love. Take a shuffled deck of cards. Deal off the cards one by one until you reach any Ace. Turn over the next card, and note what it is. The question : which card has a higher probability of being turned over, the Ace of Spades or the Two of Hearts?
|soft-question|recreational-mathematics|puzzle|big-list|
0
Real world uses of homotopy theory
I covered homotopy theory in a recent maths course. However, I was never presented with any reasons as to why (or even if) it is useful. Are there any good examples of its use outside academia?
Robert Ghrist is an amazing applied mathematician who uses a lot of interesting algebraic topology for engineering applications. He uses homology and sheaf theory. I claim this answers your question since homology is a generalization of homotopy theory. Relevant link: http://www.math.uiuc.edu/~ghrist/index_files/research.htm
|soft-question|big-list|applications|homotopy-theory|
0
How can I write an equation that matches any sequence?
One thing I have been wondering about lately is how to write an equation that describes a pattern of numbers. What I mean is: x 0 1 2 y 1 5 9 If I have this, I can tell that an equation that describes this would be $y=4x+1$ . In fact, I don't even need the third pair of numbers. It's very easy when the equation is a straight line. But when the equation is a parabola, it's not always that easy. For example: x 0 1 2 y 1 2 5 I can tell this is $ y=x^2+1$ , because I recognize the pattern. But I can't always tell just by looking at the numbers what the right equation should be. Is there some way to always know the right equation? I know that if you get the $x=0$ term you can get the $c$ in $y=ax^2+bx+c$ , but that's not enough to let me solve it like I can when the equation is just a line. For example, can someone show me how you would do it for x 0 1 2 y 5 4 7 It's not a homework question, I promise!
If you know your relationship is going to be a polynomial , then there are some pretty (conceptually) simple ways you can do this. If you know what degree your polynomial is (line, parabola, cubic, etc.) then your job will be much easier. But if not, then you simply need to look at the number of points you have. If you are given one point, the best you can do is of degree 0 ( y = k ) If you are given two points, the best you can do is of degree 1 ( y = A x + B ) If you are given three points, the best you can do is of degree 2 ( y = A x 2 + B x + C ) If you are given four points, the best you can do is of degree 3 ( y = A x 3 + B x 2 + C x + D ) etc. When I say "the best you can do", what I mean is -- if you have a parabola, but are only given two points, then you really can't identify the parabola. But you can say that it's a simple line. Let's assume you have three points. The "best you can do" is assume that it is degree 2. If it is actually of degree one, your answer will magically
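Justin's point that $n$ points determine a unique polynomial of degree at most $n-1$ can be checked mechanically. This sketch (my illustration, not part of the original answer) solves the small Vandermonde system with exact rational arithmetic, recovering the asker's third example:

```python
from fractions import Fraction

def fit_polynomial(points):
    """Fit the unique degree-(n-1) polynomial through n points by solving
    the Vandermonde system with Gauss-Jordan elimination over the rationals."""
    n = len(points)
    # Augmented rows: [x^0, x^1, ..., x^(n-1) | y]
    rows = [[Fraction(x) ** j for j in range(n)] + [Fraction(y)]
            for x, y in points]
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n):
            if r != col and rows[r][col] != 0:
                factor = rows[r][col] / rows[col][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] / rows[i][i] for i in range(n)]  # [c0, c1, c2, ...]

coeffs = fit_polynomial([(0, 5), (1, 4), (2, 7)])
print(coeffs)  # [5, -3, 2]  ->  y = 2x^2 - 3x + 5
```

Note that for the example sequence 5, 4, 7 this yields $y = 2x^2 - 3x + 5$, which indeed passes through all three points.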
|algebra-precalculus|
1
Proof for multiplying generating functions
I've learned that multiplying two generating functions $f(x)$ and $g(x)$ will give the result \begin{equation*} \sum_{k=0}^\infty\left(\sum_{j=0}^k a_j\,b_{k-j}\right)x^k. \end{equation*} I've used the result, but it was presented in my class without proof and I'm having some trouble tracking one down. Weak google-foo today, I suppose. Can anyone give me a pointer to a proof? If this is a question better answered in book form, that is fine as well.
It is actually the other way round. A generating function is generally defined to have an addition operation in which the components are added termwise, and a multiplication operation like the one you mentioned. Once we have made these definitions, we observe that polynomials obey the same laws, and so it is convenient to represent generating functions as infinite polynomials (formal power series) rather than just as infinite tuples.
|generating-functions|sequences-and-series|
1
Proof for multiplying generating functions
I've learned that multiplying two generating functions $f(x)$ and $g(x)$ will give the result \begin{equation*} \sum_{k=0}^\infty\left(\sum_{j=0}^k a_j\,b_{k-j}\right)x^k. \end{equation*} I've used the result, but it was presented in my class without proof and I'm having some trouble tracking one down. Weak google-foo today, I suppose. Can anyone give me a pointer to a proof? If this is a question better answered in book form, that is fine as well.
Casebash is correct that this is a definition and not a theorem. But the motivation from 3.48 (Definition of product of series) of little Rudin may convince you that this is a good definition: $\sum_{n=0}^{\infty} a_n z^n \cdot \sum_{n=0}^{\infty} b_n z^n = (a_0+a_1z+a_2z^2+ \cdots)(b_0+b_1z+b_2z^2+ \cdots)$ $=a_0b_0+(a_0b_1 + a_1b_0)z + (a_0b_2+a_1b_1+a_2b_0)z^2 + \cdots$ $=c_0+c_1z+c_2z^2+ \cdots$ where $c_n=\sum_{k=0}^n a_k b_{n-k}$
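As a quick numerical illustration of the Cauchy-product definition (a sketch I've added, not from Rudin), multiplying truncated coefficient lists directly:

```python
def cauchy_product(a, b):
    """Coefficients c_k = sum_{j=0}^{k} a_j * b_{k-j} of the product series,
    truncated to the shorter of the two input lengths."""
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

# Sanity check: (1 + x + x^2 + ...)^2 = 1 + 2x + 3x^2 + ...
geo = [1] * 6
print(cauchy_product(geo, geo))  # [1, 2, 3, 4, 5, 6]
```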
|generating-functions|sequences-and-series|
0
How can I write an equation that matches any sequence?
One thing I have been wondering about lately is how to write an equation that describes a pattern of numbers. What I mean is: x 0 1 2 y 1 5 9 If I have this, I can tell that an equation that describes this would be $y=4x+1$ . In fact, I don't even need the third pair of numbers. It's very easy when the equation is a straight line. But when the equation is a parabola, it's not always that easy. For example: x 0 1 2 y 1 2 5 I can tell this is $ y=x^2+1$ , because I recognize the pattern. But I can't always tell just by looking at the numbers what the right equation should be. Is there some way to always know the right equation? I know that if you get the $x=0$ term you can get the $c$ in $y=ax^2+bx+c$ , but that's not enough to let me solve it like I can when the equation is just a line. For example, can someone show me how you would do it for x 0 1 2 y 5 4 7 It's not a homework question, I promise!
Given a list of terms of a sequence as you describe, one technique that may be of use (supplementary to Justin's answer) is finite differences. Calculate the differences between successive terms. If these first differences are constant, then a linear equation fits the terms you have. If not, compute the differences of the differences. If these second differences are constant, then a quadratic equation fits the terms you have. If not, you can continue to compute differences until you reach a constant difference (constant nth differences mean an nth-degree polynomial), differences that are a constant multiple of the previous differences (an exponential of some sort), or you run out of terms. In any case, what you find is limited to matching the terms you know: without some kind of general rule for the sequence, the first unknown term could be anything and completely alter the pattern (and with n known terms, a polynomial of degree n-1 will always fit perfectly).
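The differencing procedure is easy to mechanize. This sketch (my addition) repeatedly differences the sequence until it is constant and reports the degree of a polynomial fitting the known terms:

```python
def polynomial_degree(ys):
    """Difference the sequence until it is constant; the number of
    differencing steps is the degree of a polynomial fitting the terms."""
    degree = 0
    while len(set(ys)) > 1:
        if len(ys) < 2:
            raise ValueError("not enough terms to reach constant differences")
        ys = [b - a for a, b in zip(ys, ys[1:])]  # successive differences
        degree += 1
    return degree

print(polynomial_degree([1, 5, 9]))  # 1  (linear:    y = 4x + 1)
print(polynomial_degree([1, 2, 5]))  # 2  (quadratic: y = x^2 + 1)
print(polynomial_degree([5, 4, 7]))  # 2  (quadratic)
```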
|algebra-precalculus|
0
Finding Moment of Inertia (Rotational Inertia?) $I$ Using Integration?
I just came back from my Introduction to Rotational Kinematics class, and one of the important concepts they described was Rotational Inertia , or Moment of Inertia . It's basically the equivalent of mass in Newton's $F = m a$ in linear motion. The equivalent rotational equation is $\tau = I \alpha$ , where $\tau$ is rotational force, $\alpha$ is rotational acceleration, and $I$ is rotational inertia. For a point mass about an axis, $I$ is $m r^2$ , where $r$ is the distance from the point to the axis of rotation. For a continuous body, this is an integral -- $I = \int r^2 \,dm$ . This really doesn't make any sense to me...you have two independent variables? I am only used to having one independent variable and one constant. So I would solve this, using my experience with calculus (which encompasses a read through the Sparks Notes packet) as $ I = m r^2 $ . But obviously, this is wrong? $r$ is not a constant! How do I deal with it? Do I need to replace $r$ with an expression that varies wit
Seeing an expression like $I = \int r^2 \,dm$ is certainly confusing the first time you see it. In an expression like $\int x^2 \,dx$ we are effectively iterating over a range on the $x$ axis and adding up the areas of infinitely many infinitesimally thin strips; the area of each strip is approximated as the function value ($x^2$) times the strip width ($dx$). When you see $dm$, we are iterating over bits of mass instead. We still sum up the function value ($r^2$), but this time we multiply it by the strip mass ($dm$). To solve the problem, we usually put $m$ in terms of another variable which we can integrate over more easily. For example, consider the moment of inertia of a rod of length $L$ about its center, with total mass $L$ (so the linear density is $1$ and $dm = dx$). Each bit of length $dx$ has mass $dm$, and $r = |x|$. Then $I = \int r^2 \,dm = \int |x|^2 \,dx = \int x^2 \,dx = \frac{x^3}{3} + C$. Now, we have definite values of $x$ to substitute ($-L/2$ and $L/2$), so we write $$ I = \frac{(L/2)^3}{3}-\frac{(-L/2)^3}{3}=\frac{L^3}{12}. $$
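The strip-summing picture can be checked numerically (a sketch I've added): summing $r^2\,dm$ over many thin strips of the centered rod, with density $1$, reproduces $L^3/12$.

```python
def rod_moment_of_inertia(length, n_strips=100_000):
    """Approximate I = integral of r^2 dm for a uniform rod (density 1)
    about its center by summing r^2 * dm over small strips."""
    dm = length / n_strips  # strip mass; also the strip width, since density = 1
    total = 0.0
    for i in range(n_strips):
        x = -length / 2 + (i + 0.5) * dm  # midpoint of strip i
        total += x ** 2 * dm
    return total

L = 2.0
print(rod_moment_of_inertia(L), L ** 3 / 12)  # both close to 0.6667
```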
|calculus|physics|
1
What's an intuitive way to think about the determinant?
In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from. Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state. The first thing to think about if you want an “abstract” definition of the determi
|linear-algebra|matrices|determinant|intuition|
1
Why does the discriminant of a cubic polynomial being less than $0$ indicate complex roots?
The discriminant $\Delta = 18abcd - 4b^3d + b^2 c^2 - 4ac^3 - 27a^2d^2$ of the cubic polynomial $ax^3 + bx^2 + cx+ d$ indicates not only whether there are repeated roots (when $\Delta$ vanishes), but also that there are three distinct, real roots if $\Delta > 0$, and that there is one real root and two complex roots (complex conjugates) if $\Delta < 0$. Why does $\Delta < 0$ indicate complex roots?
These implications are reached by considering the three different cases for the roots $\{ r_1, r_2, r_3 \}$ of the polynomial: a repeated root, all distinct real roots, or two complex roots and one real root. When one of the roots is repeated, say $r_1 = r_2$, then it is clear that the discriminant is $0$ because the $r_1 - r_2$ factor of the product is $0$. When one root is a complex number $\rho = x + yi$ (with $y \neq 0$), then by the complex conjugate root theorem , $\overline{\rho} = x - yi$ is also a root. By the same theorem, the remaining third root must be real. Evaluating the product in the discriminant for this case, $$ \begin{align*} (\rho - \overline{\rho})^2 (\rho - r_3)^2 (\overline{\rho} - r_3)^2 &= (2yi)^2 (x + yi - r_3)^2 (x - yi - r_3)^2 \\ &= -4y^2 [((x - r_3) + yi) ((x - r_3) - yi) ]^2 \\ &= -4y^2 ((x - r_3)^2 + y^2)^2 \end{align*} $$ which is strictly less than $0$ since $y \neq 0$. Finally, when all roots are real and distinct, the product is clearly positive. Putting it all together, $\Delta$: less than $0
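The identity behind this argument, $\Delta = a^4 (r_1-r_2)^2(r_1-r_3)^2(r_2-r_3)^2$, can be spot-checked numerically. The sketch below is my addition; the two sample cubics are illustrative examples, not from the original answer:

```python
def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

def squared_root_differences(r1, r2, r3, a=1):
    """a^4 * (r1-r2)^2 (r1-r3)^2 (r2-r3)^2 -- equals the discriminant."""
    return a**4 * ((r1 - r2) * (r1 - r3) * (r2 - r3)) ** 2

# Three distinct real roots 1, 2, 3  ->  x^3 - 6x^2 + 11x - 6, discriminant > 0
print(cubic_discriminant(1, -6, 11, -6))          # 4
print(squared_root_differences(1, 2, 3))          # 4

# One real root and a conjugate pair 1, i, -i  ->  x^3 - x^2 + x - 1, discriminant < 0
print(cubic_discriminant(1, -1, 1, -1))           # -16
print(squared_root_differences(1, 1j, -1j).real)  # -16.0
```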
|algebra-precalculus|polynomials|roots|symmetric-polynomials|
1
When you randomly shuffle a deck of cards, what is the probability that it is a unique permutation never before configured?
I just came back from a class on Probability in Game Theory, and was musing over something in my head. Assuming, for the sake of the question: Playing cards in their current state have been around for approximately eight centuries A deck of playing cards is shuffled to a random configuration one billion times per day Every shuffle ever is completely (theoretically) random and unaffected by biases caused by human shuffling and the games the cards are used for By "deck of cards", I refer to a stack of unordered $52$ unique cards, with a composition that is identical from deck to deck. This would, approximately, be on the order of $3 \cdot 10^{14}$ random shuffles in the history of playing cards. If I were to shuffle a new deck today, completely randomly, what are the probabilistic odds (out of $1$) that you create a new unique permutation of the playing cards that has never before been achieved in the history of $3 \cdot 10^{14}$ similarly random shuffles? My first thought was to think t
Your original answer of $\dfrac{3 \times 10^{14}}{52!}$ is not far from being right. That is in fact the expected number of times any ordering of the cards has occurred. The probability that any particular ordering of the cards has not occurred, given your initial assumptions, is $\left(1-\frac1{52!}\right)^{(3\times10^{14})}$, and the probability that it has occurred is 1 minus this value. But for small values of $n\epsilon$, $(1+\epsilon)^n$ is nearly $1+n\epsilon$. In particular, since $52!\approx 8\times 10^{67}$ and so $\dfrac{3\times10^{14}}{52!}\approx 3.75\times 10^{-54}$ is microscopically small, $1-\left(1-\frac1{52!}\right)^{(3\times10^{14})}$ is very nearly $\frac1{52!}\times (3\times10^{14})$.
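This approximation is easy to verify numerically (a sketch I've added, assuming Python floats; the power is computed stably via `log1p`/`expm1` since $1/52!$ is far below $1$):

```python
import math

n_shuffles = 3 * 10**14
deck_orderings = math.factorial(52)   # about 8.07e67

# 1 - (1 - 1/52!)^n, computed stably
p_seen = -math.expm1(n_shuffles * math.log1p(-1 / deck_orderings))

# First-order approximation n / 52!
approx = n_shuffles / deck_orderings

print(p_seen, approx)  # both about 3.7e-54 and essentially equal
```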
|probability|
1
When you randomly shuffle a deck of cards, what is the probability that it is a unique permutation never before configured?
I just came back from a class on Probability in Game Theory, and was musing over something in my head. Assuming, for the sake of the question: Playing cards in their current state have been around for approximately eight centuries A deck of playing cards is shuffled to a random configuration one billion times per day Every shuffle ever is completely (theoretically) random and unaffected by biases caused by human shuffling and the games the cards are used for By "deck of cards", I refer to a stack of unordered $52$ unique cards, with a composition that is identical from deck to deck. This would, approximately, be on the order of $3 \cdot 10^{14}$ random shuffles in the history of playing cards. If I were to shuffle a new deck today, completely randomly, what are the probabilistic odds (out of $1$) that you create a new unique permutation of the playing cards that has never before been achieved in the history of $3 \cdot 10^{14}$ similarly random shuffles? My first thought was to think t
Suppose we shuffle a deck and get a permutation p. For each previous shuffle there is a 1-1/52! chance that p doesn't match it. The previous shuffles are independent, in that regardless of what p and the other permutations are, the chance of p not matching a given shuffle is 1-1/52!. When events are independent we can simply multiply their probabilities to find the chance of all of them happening. In this case, each event is a match not happening, so the chance of no matches given n previous shuffles is (1-1/52!)^n. We can then complete the calculation as Michael did.
|probability|
0
How do Lagrange multipliers work to find the lowest value of a function subject to a constraint?
I have been using Lagrange multipliers in constrained optimization problems, but I don't see how they actually work to simultaneously satisfy the constraint and find the lowest possible value of an objective function.
This type of problem is generally referred to as constrained optimization . A general technique to solve many of these types of problems is known as the method of Lagrange multipliers ; here is an example of such a problem using Lagrange multipliers and a short justification as to why the technique works. Consider the paraboloid given by $f(x,y) = x^2 + y^2$. The global minimum of this surface lies at the origin (at $x=0$, $y=0$). If we are given the constraint, a requirement on the relationship between $x$ and $y$, that $3x+y=6$, then the origin can no longer be our solution (since $3\cdot 0 + 1 \cdot 0 \neq 6$). Yet, there is a lowest point on this function satisfying the given constraint. What we have so far: Objective function: $f(x,y) = x^2 + y^2$, subject to: $3x+y=6$. From here we can derive the Lagrange formulation of our constrained minimization problem. This will be a function $L$ of $x$, $y$, and a single Lagrange multiplier $\lambda$ (since we have only a single constraint)
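Carrying the answer's example through to the end (a sketch I've added; the numbers follow from the stationarity conditions $\nabla f = \lambda \nabla g$ applied to this particular objective and constraint):

```python
def lagrange_min():
    """Minimize f = x^2 + y^2 subject to g = 3x + y - 6 = 0.
    Stationarity: grad f = lambda * grad g, i.e. 2x = 3*lam and 2y = lam."""
    # Substituting x = 3*lam/2, y = lam/2 into 3x + y = 6 gives 5*lam = 6.
    lam = 6 / 5
    x, y = 3 * lam / 2, lam / 2
    return x, y, lam

x, y, lam = lagrange_min()
print(x, y)       # 1.8 0.6  -- the closest point on the line to the origin
print(3 * x + y)  # 6.0      -- the constraint is satisfied
```

Geometrically, $(1.8, 0.6)$ is the foot of the perpendicular from the origin to the line $3x + y = 6$, which is exactly what minimizing $x^2 + y^2$ on the line should produce.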
|calculus|optimization|lagrange-multiplier|
0
Probability that two people see each other at the coffee shop
Two mathematicians each come into a coffee shop at a random time between 8:00 a.m. and 9:00 a.m. each day. Each orders a cup of coffee then sits at a table, reading a newspaper for 20 minutes before leaving to go to work. On any day, what is the probability that both mathematicians are at the coffee shop at the same time (that is, their arrival times are within 20 minutes of each other)?
Working in hours and letting 8:00 a.m. be t=0, each mathematician's arrival time is a number between 0 and 1. The sample space can be represented by the unit square in the coordinate plane, with one mathematician's arrival time as x and the other's as y, where regions with equal areas are equally likely. We want $|x - y| < 1/3$. The complementary region, where $|x - y| \ge 1/3$, consists of two right triangles with legs of length $2/3$, so its total area is $2 \cdot \frac{1}{2}(2/3)^2 = 4/9$. The area of the desired region is therefore $1 - 4/9 = 5/9$.
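A Monte Carlo check of the $5/9$ answer (a sketch I've added, assuming the uniform independent arrivals described in the question):

```python
import random

random.seed(0)
trials = 1_000_000
# Count the trials where the two uniform arrival times are within 1/3 hour.
meet = sum(abs(random.random() - random.random()) < 1 / 3
           for _ in range(trials))
print(meet / trials)  # close to 5/9 = 0.5555...
```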
|probability|
1
Is there a relationship between $e$ and the sum of $n$-simplexes volumes?
When I look at the Taylor series for $e^x$ and the volume formula for oriented simplexes, it makes $e^x$ look like it is, at least almost, the sum of simplexes volumes from $n$ to $\infty$. Does anyone know of a stronger relationship beyond, "they sort of look similar"? Here are some links: Volume formula http://en.wikipedia.org/wiki/Simplex#Geometric_properties Taylor Series http://en.wikipedia.org/wiki/E_%28mathematical_constant%29#Complex_numbers
The answer is that it's just the fact "a cone over a simplex is a simplex" rewritten in terms of the generating function: observe that because the $n$-simplex is a cone over the $(n-1)$-simplex, $\frac{\partial}{\partial x}\mathrm{vol}(\text{$n$-simplex w. edge $x$}) = \mathrm{vol}(\text{$(n-1)$-simplex w. edge $x$})$; in other words, $e(x):=\sum_n \mathrm{vol}(\text{$n$-simplex w. edge $x$})$ satisfies the equation $e'(x)=e(x)$. So $e(x)=Ce^x$ -- and $C=1$ because $e(0)=1$.
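To see the sum concretely, take the "corner" $n$-simplex $\{0 \le t_1 \le \cdots \le t_n \le x\}$, whose volume is $x^n/n!$ (this is one standard way to make "$n$-simplex with edge $x$" precise; the numeric check below is my addition):

```python
import math

def corner_simplex_volume(n, x):
    """Volume of the corner simplex {0 <= t_1 <= ... <= t_n <= x}: x^n / n!."""
    return x ** n / math.factorial(n)

x = 1.5
# Summing the simplex volumes over n reproduces e^x, as the answer explains.
series = sum(corner_simplex_volume(n, x) for n in range(50))
print(series, math.exp(x))  # both close to 4.4817
```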
|geometry|intuition|sequences-and-series|
1
How can I randomly generate trees?
I want to randomly generate trees, i.e. undirected acyclic graphs with a single root, making sure that all possible trees with a fixed number of nodes n are equally likely.
Knuth says to look at it as generating all nested parentheses in lexicographic order. Look here for the details http://www-cs-faculty.stanford.edu/~uno/fasc4a.ps .
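For *labeled* trees there is also a self-contained alternative to Knuth's parentheses method (the sketch below is my addition and does not implement Knuth's procedure): a uniformly random Prüfer sequence of length $n-2$ decodes to a uniformly random labeled tree on $n$ vertices, since the correspondence is a bijection.

```python
import heapq
import random

def random_labeled_tree(n, rng=random):
    """Uniform random labeled tree on vertices 0..n-1: draw a random Prüfer
    sequence of length n-2 and decode it into an edge list."""
    if n < 2:
        return []
    prufer = [rng.randrange(n) for _ in range(n - 2)]
    degree = [1] * n                      # degree = (occurrences in sequence) + 1
    for v in prufer:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in prufer:                      # join the smallest current leaf to v
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

random.seed(1)
print(random_labeled_tree(6))  # 5 edges covering all 6 vertices
```

Note this samples uniformly over *labeled* trees (of which there are $n^{n-2}$, by Cayley's formula); sampling uniformly over unlabeled or rooted shapes is a different problem, which is where Knuth's approach comes in.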
|probability-theory|random|
0
Repayments of a loan with compound interest
Suppose I have a loan of M dollars. At the end of each year, I am charged interest at rate R and make a repayment of P. The loan is repaid after n years. How long (n) does it take to repay the loan if I am given the other variables? How much are the repayments of P if I am given the other variable? Suppose that the payments were at the start of the year. How would this change the problem?
Basic Theory The way to solve this problem is to calculate how much each payment reduces your debt after you have been repaying your loan for $n$ years. Let $r=1+R/100$, i.e. this converts the interest rate from a percentage to a value you can multiply your debt by to calculate how much you owe after adding one time period's interest. If I make a payment of $P$ at the end of the $k$th year, then we avoid paying interest on this money $n-k$ times and so we reduce our debt by $Pr^{n-k}$. We sum up the future values of all our payments: $\sum\limits_{k=1}^n Pr^{n-k}$ If we reverse this, it is equivalent to: $\sum\limits_{k=0}^{n-1} Pr^k$ This is a geometric series , which can be summed using the formula $\frac{a(r^{n}-1)}{r-1}$ where $a$ is the first term, $r$ is the common ratio and $n$ is the number of terms being summed. We then equate this with the debt owed after $n$ years, which is $Mr^n$. We now compare the two equations: $\frac{P(r^{n}-1)}{r-1} = Mr^n$ Calculating $n$ We group the
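The balance equation $\frac{P(r^n-1)}{r-1} = Mr^n$ can be solved for either $n$ or $P$; this sketch (my addition, with illustrative sample numbers, payments at year end) does both:

```python
import math

def years_to_repay(M, R, P):
    """Solve P*(r^n - 1)/(r - 1) = M*r^n for n.
    Rearranging: r^n = P / (P - M*(r - 1)), so take logs.
    Requires P > M*(r - 1), i.e. payments exceed the yearly interest."""
    r = 1 + R / 100
    return math.log(P / (P - M * (r - 1))) / math.log(r)

def payment(M, R, n):
    """Yearly payment that repays debt M at rate R% in exactly n years."""
    r = 1 + R / 100
    return M * r ** n * (r - 1) / (r ** n - 1)

n = years_to_repay(100_000, 5, 12_000)
print(n)                          # about 11.05 years
print(payment(100_000, 5, n))     # 12000.0 (round trip)
```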
|finance|
1
Combinations of selecting $n$ objects with $k$ different types
Suppose that I am buying cakes for a party. There are $k$ different types and I intend to buy a total of $n$ cakes. How many different combinations of cakes could I possibly bring to the party?
Let g(n,k) = # combinations of cakes. Notice that: g(n,1) = 1. (all the cakes are the same) g(n,2) = n+1. (e.g. for 5 cakes, the # of cakes of type 1 can be 0, 1, 2, 3, 4, 5) g(1,k) = k. g(2,k) = k*(k-1)/2 + k (the first term is two different cakes; the second term is when both cakes are the same), as long as k > 1. (otherwise g(2,1) = 1) g(3,k) = k * (k-1) * (k-2)/6 + k*(k-1)/2 * 2 + k (the first term is 3 different cakes; the second term is 2 different cakes, with a *2 since there are two choices for which one to duplicate; the third term is when all 3 cakes are the same), as long as k > 2. If we think of k as a radix rather than the # of cakes, then this problem is equivalent to counting the # of distinct n-digit numbers in base k whose digits are in sorted order. (e.g. 1122399 is equivalent to 9921231) I think I can express it as a nonrecursive sum: g(n,k) = sum from j=1 to min(n,k) of { (k choose j) * h(n,j) } where h(n,j) is the # of ways to partition n cakes using j different
|combinatorics|
0
Combinations of selecting $n$ objects with $k$ different types
Suppose that I am buying cakes for a party. There are $k$ different types and I intend to buy a total of $n$ cakes. How many different combinations of cakes could I possibly bring to the party?
Using a method that's often called "stars and bars": We draw $n$ stars in a row to represent the cakes, and $k-1$ bars to divide them up. All of the stars to the left of the first bar are cakes of the first type; stars between the first two bars are of the second type; . . . . **|***||*| Here's an example with $n=6$ and $k=5$. We're getting 2 of the first type, 3 of the second type, 0 of the third type, 1 of the fourth type, and 0 of the fifth type. In order to solve the problem, we just need to reorder the stars and bars by choosing the $k-1$ spots for the bars out of the $n+k-1$ total spots, so our answer is: $$ \binom{n+k-1}{k-1}. $$
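The stars-and-bars count can be verified by brute force for small cases (a sketch I've added, using `itertools` to enumerate the multisets directly):

```python
from math import comb
from itertools import combinations_with_replacement

def cake_combinations(n, k):
    """Multisets of size n drawn from k types: C(n + k - 1, k - 1)."""
    return comb(n + k - 1, k - 1)

# Brute-force check against direct enumeration for small n, k.
for n in range(1, 6):
    for k in range(1, 5):
        brute = sum(1 for _ in combinations_with_replacement(range(k), n))
        assert brute == cake_combinations(n, k)

print(cake_combinations(6, 5))  # 210, matching the n=6, k=5 example
```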
|combinatorics|
1
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
Complex numbers are the final step in a sequence of increasingly "unreal" extensions to the number system that humans have found it necessary to add over the centuries in order to express significant numerical concepts. The first such "unreal number" was zero, back in the mists of time. It seems obvious to us now, but it must have seemed strange at first. How can the number of sheep I have be zero, when I don't actually have any sheep? Negative numbers are the next most obvious addition to the family of numbers. But what does it mean to have -2 apples? If I have 3 apples and you have 5, it's convenient to be able to say that I have -2 more apples than you. Even so, during the middle ages many mathematicians were very uncomfortable with the idea of negative numbers and tried to arrange their equations so that they didn't occur. Rationals (fractions) seem real enough, since I'm happy to have 2/5 of a pizza. However, this is not the number of pizzas that I have, just a ratio between 0 and
|soft-question|complex-numbers|education|philosophy|
0
Picking cakes if we need at least one of each type
I need $n$ cakes for a party. I go to the cake shop and there are $k$ different kinds of cake. For variety, I'd like to get at least one of each cake. How many ways can I do this?
Similar to the stars and bars technique , consider the n cakes as a row of n stars (*). Instead of permuting them with k-1 bars (|), which would allow two bars next to each other and hence 0 cakes of some type, place the k-1 bars (needed to split the n stars into k types) into the n-1 spaces between the stars, allowing at most one bar per space. The number of ways to do this is ${n-1 \choose k-1}$. Alternately, since you need one of each type, there are only n-k cakes for which you are choosing types. Using the stars and bars technique , there are ${(n-k)+k-1 \choose k-1} = {n-1 \choose k-1}$ ways to do it.
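As with the unrestricted case, $\binom{n-1}{k-1}$ can be verified against brute-force enumeration for small $n$ and $k$ (a sketch I've added):

```python
from math import comb
from itertools import combinations_with_replacement

def at_least_one_each(n, k):
    """Size-n multisets from k types using every type at least once: C(n-1, k-1)."""
    return comb(n - 1, k - 1)

# Enumerate all multisets and keep those that use all k types.
for n in range(1, 7):
    for k in range(1, n + 1):
        brute = sum(1 for m in combinations_with_replacement(range(k), n)
                    if len(set(m)) == k)
        assert brute == at_least_one_each(n, k)

print(at_least_one_each(6, 3))  # C(5, 2) = 10
```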
|combinatorics|
1
How can I write an equation that matches any sequence?
One thing I have been wondering about lately is how to write an equation that describes a pattern of numbers. What I mean is: x 0 1 2 y 1 5 9 If I have this, I can tell that an equation that describes this would be $y=4x+1$ . In fact, I don't even need the third pair of numbers. It's very easy when the equation is a straight line. But when the equation is a parabola, it's not always that easy. For example: x 0 1 2 y 1 2 5 I can tell this is $ y=x^2+1$ , because I recognize the pattern. But I can't always tell just by looking at the numbers what the right equation should be. Is there some way to always know the right equation? I know that if you get the $x=0$ term you can get the $c$ in $y=ax^2+bx+c$ , but that's not enough to let me solve it like I can when the equation is just a line. For example, can someone show me how you would do it for x 0 1 2 y 5 4 7 It's not a homework question, I promise!
Of course there are infinitely many equations (even if you require them to be infinitely differentiable...) that satisfy the given constraints. As Isaac and Justin already wrote, you may always find a polynomial of degree at most n-1 (where n is the number of points given) which satisfies the given data; but you cannot be sure that this is the right answer. Moreover, if the data are approximate rather than exact, the resulting polynomial may be quite different from the correct function, since it would likely have huge peaks and falls. In such cases, an approximate method like least squares can be more useful.
|algebra-precalculus|
0
What transformations of the plane are geometrically constructable (compass & straight edge)?
Congruence transformations (isometries) and similarity transformations (isometries + dilations) should be constructable. What about other affine transformations? Other conformal mappings? edit : by constructable, I mean given the defining information for the transformation in a geometric way (e.g. a dilation requires a center and a ratio, so the given could be a point and two segments), can you construct the image of a point under the transformation from its preimage?
Couldn't one construct all transformations which send each point (x,y) to another point (x',y') whose coordinates can be computed from the first point's by performing only the four arithmetic operations and extraction of square roots?
|geometry|euclidean-geometry|geometric-construction|transformational-geometry|
0
Compass-and-straightedge construction of the square root of a given line?
Given A straight line of arbitrary length The ability to construct a straight line in any direction from any starting point with the "unit length", i.e. the length whose magnitude equals its own square root. Is there a way to geometrically construct (using only a compass and straightedge) a line with the length of the square root of the arbitrary-lengthed line? What is the mathematical basis? Also, why can't this be done without the unit line length?
Without the unit-length segment--that is, without something to compare the first segment to--its length is entirely arbitrary, so it can't be assigned a value, so there's no value of which to take the square root. Let the given segment (with length $x$) be $AB$ and let point $C$ be on ray $AB$ such that $BC = 1$. Construct the midpoint $M$ of segment $AC$, construct the circle with center $M$ passing through $A$, construct the line perpendicular to $AB$ through $B$, and let $D$ be one of the intersections of that line with the circle centered at $M$ (call the other intersection $E$). Then $BD = \sqrt{x}$. $AC$ and $DE$ are chords of the circle intersecting at $B$, so by the power of a point theorem, $AB \cdot BC = DB \cdot BE$, so $x \cdot 1 = x = DB \cdot BE$. Since $DE$ is perpendicular to $AC$ and $AC$ is a diameter of the circle, $AC$ bisects $DE$ and $DB = BE$, so $x = DB^2$ or $DB = \sqrt{x}$. edit : this is a special case of the more general geometric-mean construction. Given two lengths $AB$ and $BC$ (arranged as above), the above construction produces the length $BD = \sqrt{AB \cdot BC}$.
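The construction can be checked with coordinates (a numeric sketch I've added, placing A at the origin and B at distance x along the positive axis):

```python
import math

def construct_sqrt(x):
    """Coordinates of the construction: A=(0,0), B=(x,0), C=(x+1,0).
    Draw the circle on diameter AC and take D directly above B; return BD."""
    center = (x + 1) / 2          # midpoint M of AC
    radius = (x + 1) / 2          # half of AC
    # D = (x, h) lies on the circle: (x - center)^2 + h^2 = radius^2
    h = math.sqrt(radius ** 2 - (x - center) ** 2)
    return h

for x in [0.25, 2.0, 9.0]:
    print(construct_sqrt(x), math.sqrt(x))  # the pairs agree
```

Algebraically, $\left(\frac{x+1}{2}\right)^2 - \left(x - \frac{x+1}{2}\right)^2 = \frac{(x+1)^2 - (x-1)^2}{4} = x$, which is exactly the power-of-a-point computation in the answer.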
|geometry|euclidean-geometry|geometric-construction|
0