Columns: title, subreddit, post_id, score, link_flair_text, is_self, over_18, upvote_ratio, post_content, C1–C5 (post_content is a string of up to ~20.9k characters; the comment columns C1–C5 are strings of up to ~10k characters each).
[ "Are you aware of any biconditional statements which are somewhat easy to prove in one direction, but are considerably more difficult to prove in the opposite direction?" ]
[ "math" ]
[ "e2wtf7" ]
[ 237 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
null
"Fermat's Last Theorem" iff 0+0=0
Well, there are lots of them of the following form: "topological spaces with such a property are connected iff they are path-connected", "a subspace of a Banach space of such a type is bounded iff it is totally bounded", where the proof in one direction is obvious from the definition.
There was this post on Math.StackExchange that asked essentially the same question. Some highlights from that thread: R^n has the structure of a real division algebra iff n = 1, 2, 4, or 8. One direction is easy if one just writes down the Cayley-Dickson construction. The other takes a decent amount of work and is usually proven in mid-level graduate courses. A compact 3-manifold is simply connected if and only if it is homeomorphic to the 3-sphere. (One direction is easy; the other direction is the Poincaré conjecture.) Let n be a positive integer. Then x^n + y^n = z^n is solvable in positive integers if and only if n = 1 or n = 2. (One direction is Fermat's Last Theorem.) One they point out in that thread is trivial in one direction and open in the other: "A positive integer n is even if and only if it is the sum of two primes both greater than 2." One direction is just that any prime other than 2 is odd, and the other direction is Goldbach's conjecture.
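The Goldbach biconditional mentioned in that thread invites a quick computational sanity check. A hedged sketch (stdlib only; the helper names are my own, and the hard direction is of course only being spot-checked for small even numbers, not proven):

```python
# "An even integer n >= 6 is the sum of two primes both greater than 2."
# Easy direction: any two odd primes sum to an even number.
# Hard direction (Goldbach's conjecture): verified here only for small n.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def is_sum_of_two_odd_primes(n):
    # Search over odd p <= n/2 with both p and n - p prime and > 2.
    return any(is_prime(p) and is_prime(n - p)
               for p in range(3, n // 2 + 1, 2) if n - p > 2)

# The hard direction holds for every even n in this small range.
assert all(is_sum_of_two_odd_primes(n) for n in range(6, 1000, 2))
```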
Teehee, though I mean more....relevant biconditionals 😁
To make their version less trivial, consider the variant: Let n be a positive integer; then x^n + y^n = z^n has a solution in non-zero positive integers if and only if n = 1 or n = 2.
[ "What are some examples of game theory shedding light on seemingly unrelated fields of mathematics?" ]
[ "math" ]
[ "e2hfi4" ]
[ 122 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
null
Game theory has a major role in set theory, where an important axiom (the axiom of determinacy) is stated, and usually understood, in terms of games.
You might check out the surreal numbers
Do they actually use much game theory though? I know a ton of work is done using the language of games, but I never saw them utilize any concepts like Nash equilibrium or anything like that. It was just "okay, here's a game, prove that so-and-so has a/no winning strategy, hey look, it's Polish." As you may be able to tell from my comment, I did not last very long studying descriptive set theory.
> but I never saw them utilize any concepts like Nash equilibrium That sort of depends on what you mean by "game theory", but you're right that the ideas about games they use are really different from things like Nash equilibria.
[ "Configuration Spaces for the Working Undergraduate: \"[T]he motivation, intuition, and basic theory concerning these spaces are quite accessible to undergraduates.\" [abstract + link to 23p PDF]" ]
[ "math" ]
[ "e2f3ot" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.82 ]
null
While it is relatively easy to prove things about homotopy groups, they are notoriously hard to compute. For example, showing that the fundamental group of S^1 is the integers usually takes multiple pages, and this is one of the more basic constructions! I don't think pi1(S^1) = Z is a fair example of the difficulty of computing homotopy groups.
It takes no more work to compute pi1(S^1) than it does to compute homology or cohomology. Probably less, since those require a fair amount of homological algebra. There are difficulties in computing homotopy groups, but they only show up for higher homotopy groups, in dimensions above the first nontrivial homotopy group. And pi1 is as nice as you could want. Compare pi1 of products to homology of products, for example. Well, except pi1 is non-abelian. But pi1(S^1) is not even non-abelian. pi1(S^1) is as nice as you could want, as easy as can be expected, and is not an example of why homology is easier to compute than homotopy groups. I guess when he refers to the "multiple pages" proof, he's thinking of Hatcher's proof via covering space theory. I do think that that method of proof obfuscates it a bit, and also Hatcher's presentation of it (and many other topics) is needlessly verbose. But that doesn't justify the calumny.
The passage you quoted literally says that it's basic. He's giving a lower bound.
It’s absolutely true that homology is easier to compute than homotopy groups. I’m just saying the difficulty of pi1(S^1) is not an effective demonstration of this fact. Using van Kampen to compute pi1(S^1) is formally very similar to using Mayer-Vietoris to compute H1(S^1).
We begin with a motivating real-world problem/example. Imagine the Euclidean plane ℝ² as the floor of an automated factory warehouse. There are robots which must move around the warehouse to complete their programmed tasks. The statement of the problem is as follows: which paths can each of the robots take such that no collisions between robots occur? We may answer this question by computing the space of all possible arrangements of distinct points in ℝ². This space encodes essential information about the safe paths these robots may take. The aforementioned space is precisely the configuration space of points in ℝ². Thus, computing this space allows us to use topological techniques to gain useful information in regard to the above query ... When applied to the preceding example, this definition may be interpreted as all possible arrangements of robots around the factory floor such that no two robots occupy the same location. We can thus use this space to carry out motion planning.
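The collision condition in that passage is easy to operationalize: a motion of several robots stays inside the configuration space exactly when no two robots ever coincide. A toy sketch with straight-line paths (all names and the sampled paths are illustrative assumptions, not from the paper):

```python
def positions(start, end, t):
    """Linear interpolation between start and end configurations at t in [0, 1]."""
    return [(sx + t * (ex - sx), sy + t * (ey - sy))
            for (sx, sy), (ex, ey) in zip(start, end)]

def min_pairwise_distance(pts):
    return min(((ax - bx)**2 + (ay - by)**2) ** 0.5
               for i, (ax, ay) in enumerate(pts)
               for (bx, by) in pts[i + 1:])

def collision_free(start, end, steps=100, clearance=1e-6):
    """True if the straight-line motion keeps all robots distinct (sampled in time)."""
    return all(min_pairwise_distance(positions(start, end, i / steps)) > clearance
               for i in range(steps + 1))

# Two robots swapping places along parallel lanes never collide...
assert collision_free([(0, 0), (1, 1)], [(1, 0), (0, 1)])
# ...but swapping along the same line passes through a collision.
assert not collision_free([(0, 0), (1, 0)], [(1, 0), (0, 0)])
```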
[ "Smooth 4d Poincare conjecture" ]
[ "math" ]
[ "e32rgu" ]
[ 174 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
The Poincare conjecture is still open in dimension 4 in the smooth category. It is equivalent to S^4 having a unique smooth structure. The question is supposedly wide open. A paper posted today claims to settle this. Has anyone here read through the (12 page) paper to verify if it is correct?
I'm nowhere near qualified to give a definitive appraisal, and I've only really skimmed the proof, but my impression is that it's a serious attempt by a serious mathematician, but that the tools used seem too wimpy for the task at hand. That is, I don't see why this proof couldn't have been found by, say, Freedman or Kirby thirty years ago. Of course, this doesn't speak to the mathematical content, and stranger things have happened, but for now I am cautiously skeptical.
That paper has 9 versions on the ArXiv which is usually not a good sign.
I know somebody who is an expert. I'll ask what he thinks. If he has time, I'll report a summary of his thoughts.
What is going on with the downvote dogpile on this comment (and the upvotes for the far too credulous response comment)? He's right that a 12-page paper is unlikely to knock out a conjecture that has stood for years and is known not to be susceptible to existing methods. Dimension 4 is hard, y'all.
[ "Does anyone use github for preparing papers?" ]
[ "math" ]
[ "e2ua5y" ]
[ 11 ]
[ "" ]
[ true ]
[ false ]
[ 0.87 ]
[deleted]
Then wouldn't something like OneDrive or Dropbox that automatically syncs between the two devices make more sense? Also it's an awful lot more convenient than git push/pull and you (ideally) wouldn't have to deal with merge conflicts.
> Having to go through a browser or app seems more inconvenient considering how I work.

Neither Dropbox nor OneDrive requires you to go through a browser; you can set them up as folders on your computer that sync automatically. Also, Dropbox and OneDrive theoretically have size limitations which aren't present for GitHub, though that probably wouldn't cause much of an issue for .tex and .pdf files. You could always set up a Synology box or another home NAS storage solution and have it sync your files remotely. You could even just set up a Git server on it. But the storage limitations aren't that strict; I think it's around 5 GB for the free version and a TB for the subscription version.
Do you use GitHub because you need access from multiple locations? Or for source/document control? If it’s the former, then you would have the same problems with any internet-hosted solution, unless it went through your uni. If it’s the latter, wouldn’t a local repository suffice?
[ "What is the most frustrating mathematical terminology you have come across?" ]
[ "math" ]
[ "e2q7l0" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
For example, local connectedness has nothing to do with connectedness (as far as I'm aware, but feel free to let me know). But one would think that connected implies locally connected at the very least. So, what has your experience been? Can you top this (pun intended)?
The French word for manifold is variety.
We shall define 'normal' to mean "the state of being frustrated by mathematics and its terminology".
Let's define some objects that should act a certain way... Oh wait they don't quite act that way... So let's just call the ones that do "normal"! But wait, even these still aren't right, so let's call the ones that actually act the way we want "completely normal"
To be fair, a locally connected space is one with a basis of connected open sets.
> For example, local connectedness has nothing to do with connectedness

The definition of local connectedness I learned in topology was: Let X be a topological space, and let x be a point of X. Then X is locally connected at x if, for every open set V containing x, there exists a connected, open set U containing x that is a subset of V. X is locally connected if it is locally connected at x for all x in X. So it really is just a local version of connectedness. And yes, it is indeed unfortunate that connectedness doesn't imply local connectedness.
[ "Math book present for a smart 8-year-old?" ]
[ "math" ]
[ "e2uvwe" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.82 ]
null
Principles of mathematical analysis by Walter Rudin /s
Baby Rudin! Perfect
The thing about smart kids is to remember that they are in fact still kids. That isn't to say that you shouldn't nurture their talent. It's great that your niece is already so smart, but give her something that will make her happy. Doesn't have to be big, just something to make her happy. If a book on maths or physics will do that, then sure! Get her a book! Does she have other interests? Maybe you could get her something on that, a book or a toy. Happy holidays to you and your family!
https://en.wikipedia.org/wiki/The_Number_Devil
Although they won’t really make you any better at maths or physics, the Horrible History books on Newton and Einstein really inspired me to go into science.
[ "How do we know if a function is equal to its Taylor series at every point?" ]
[ "math" ]
[ "e2t9iv" ]
[ 26 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
So, I am trying to learn calculus now and I have just learned about Taylor series, and how they are made. I've seen examples for e^x, sin(x), and cos(x); in all of those cases the Taylor series converged to the function at all points, being literally the same thing as the function. But how do we know that this is true, and why isn't it true for other functions? Are there other examples of functions that are equal to their own Taylor series?
At the level of calculus, the way to prove e^x, sin x, and cos x equal their Taylor series at all x is to use a bound on the remainder term for Taylor polynomials to prove the remainder tends to 0 as the degree of the polynomial tends to infinity. If two functions equal their Taylor series on the whole real line then their composition does too, so any composition of examples you know will also be an example, such as sin(cos(sin(e^x))). Thus there are lots of examples. A function equal to its own power series on an interval is called an analytic function. The best way to understand properties of analytic functions is to learn complex analysis, because it is simpler to use complex analysis to determine when a function is complex-analytic than to determine (without complex analysis) if it is real-analytic.
A great example of a function whose Taylor series does not always converge is 1/(1-x): its Taylor series is 1 + x + x^2 + ..., which clearly only converges when |x| < 1. If you get into complex analysis, there's a nice framework for analyzing why this happens: it has to do with how close the center of the Taylor series is to a singularity of the function in the complex plane. You can also have a function whose Taylor series converges but not to the right value. An example of this is e^(-1/x^2) (and 0 when x=0, removing the discontinuity). This function, around zero, has a Taylor series of 0. But clearly the function is nonzero. This essentially is because the function approaches 0 when x is small at a faster rate than any polynomial, so a polynomial can't approximate it.
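Both phenomena in this comment are easy to see numerically: the geometric series for 1/(1-x) behaves completely differently inside and outside |x| < 1, and the flat function exp(-1/x²) shrinks faster at 0 than any power of x. A small stdlib sketch (function names are my own):

```python
import math

# 1) The Taylor series of 1/(1-x) at 0 is sum of x^k, valid only for |x| < 1.
def geometric_partial_sum(x, n):
    return sum(x**k for k in range(n))

assert abs(geometric_partial_sum(0.5, 60) - 1 / (1 - 0.5)) < 1e-12   # converges
assert geometric_partial_sum(2.0, 60) > 1e15                          # blows up

# 2) f(x) = exp(-1/x^2), with f(0) = 0, vanishes faster than any power of x,
#    so every Taylor coefficient at 0 is 0, yet f is nonzero away from 0.
f = lambda x: math.exp(-1.0 / x**2) if x else 0.0
assert f(0.1) < 0.1**20       # already far below x^20 near 0
assert f(0.1) > 0             # but not actually zero
```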
You might want to read about analytic functions in complex analysis.
Oh thanks, definitely gonna search about complex analysis and analytic functions now.
The Taylor series of e^x also satisfies y' = y, y(0) = 1, so by uniqueness e^x equals its Taylor series everywhere.
[ "Good sites for math challenges?" ]
[ "math" ]
[ "e36vp3" ]
[ 301 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
I am looking for a site that features math problems that can be solved with some college-level probability and/or high school math. Think "leetcode" equivalent for math. (For those who don't know leetcode, it's a programming site featuring coding challenges for interviews, with progressive levels of difficulty.) My goal is to sharpen my intuition and strengthen my math analytical skills.
Project Euler for sure, but literally any math textbook will have good math problems.
Art of Problem Solving
I would honestly say PE is geared more towards number theory, programming, and computer science. You definitely cannot solve these problems without computers. You also don't stand a chance if you don't understand proof methods/functional programming.
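For a concrete sense of the Project Euler flavour being described (a little math plus a little code), here is a sketch of Problem 1, which asks for the sum of the multiples of 3 or 5 below 1000, solved both by brute force and with an inclusion-exclusion closed form (function names are my own):

```python
def brute_force(limit):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def closed_form(limit):
    # Inclusion-exclusion with the arithmetic-series formula:
    # the sum of multiples of d below limit is d * m * (m + 1) / 2, m = (limit-1)//d.
    s = lambda d: d * ((limit - 1) // d) * ((limit - 1) // d + 1) // 2
    return s(3) + s(5) - s(15)

assert brute_force(1000) == closed_form(1000) == 233168
```

The closed form answers in O(1) what the brute force answers in O(limit), which is exactly the kind of shortcut later PE problems force on you.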
You can try the MATHCOUNTS Trainer on AoPS. It starts off really easy and gets progressively harder as you answer questions correctly. Or you could take a look at past problems from various math competitions and attempt questions that seem interesting.
Yea ignore them. You are there to do maths haha...not give in to their crap =)
[ "Literature on finite-horizon MDP's with binary action space but huge, stochastic state space" ]
[ "math" ]
[ "e2nka3" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
I'm working on a project that can essentially be classified as a sequential decision-making process, where at each step of the process, there is a simple binary decision, where one of the options is to terminate the process, and the other is to continue. The difficulty is that the state space is enormous (2^N where N could number in the hundreds), and the transition between states is stochastic. Does anyone know of any literature that addresses this kind of problem? Or even just what to search for? I'd also be very interested if it can be generalized to a larger action space that considers multiple binary (stop/continue) decisions in parallel. Would appreciate any advice.
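The stop/continue structure described above is the classic finite-horizon optimal-stopping shape, which suggests plain backward induction as a baseline before anything fancier. A toy sketch on a deliberately tiny state space (the reward, transition model, and horizon below are illustrative assumptions, not the OP's problem):

```python
def optimal_stopping(horizon, states, stop_reward, transition):
    """Backward induction: V[t][s] = max(stop_reward(s), E[V[t+1][s']])."""
    V = {s: stop_reward(s) for s in states}        # forced to stop at the horizon
    for t in range(horizon - 1, -1, -1):
        V = {s: max(stop_reward(s),
                    sum(p * V[s2] for s2, p in transition(s).items()))
             for s in states}
    return V

# Example: the state is an integer score clamped to [-10, 10];
# continuing moves it +1 or -1 with equal probability.
states = range(-10, 11)
trans = lambda s: {max(s - 1, -10): 0.5, min(s + 1, 10): 0.5}
V = optimal_stopping(horizon=5, states=states,
                     stop_reward=lambda s: max(s, 0), transition=trans)
assert V[10] == 10        # at the best state, stop immediately
assert V[0] >= 0          # the option to continue is never worth less than stopping
```

The catch the OP raises is real: with 2^N states this table is infeasible, so the literature to search for is approximate dynamic programming and optimal stopping with function approximation.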
This sounds like the mathematical question that neural networks are trying to answer tbh
I would rather avoid neural networks because 1) I don't have a lot of data 2) the reward function is incredibly scenario/state specific and nonlinear so I'm afraid the value function would not converge during training. I have thought about it though; I'd rather have a computationally feasible rough answer than an infeasible accurate one.
Oooo then maybe what you would want is dimensionality reduction algorithms! (Or unsupervised learning if you think about it as learning) Essentially the idea is that, given a massive dimensional vector space, the dimensionality of the actually useful parts is significantly less. Have you heard of Principal component analysis/singular value decomposition?
Yeah I have... honestly the reason I'm asking is because I actually already have developed an algorithm that solves this (without using neural networks), and I'd like to publish it but don't know if it's already been done. I haven't been able to find anything from a cursory Google search, but was hoping someone more knowledgeable could point me to a paper that had already solved it (well not actually hoping because then I can't publish).
I would be surprised if it has been done before, because it looks quite similar to the halting problem (which is undecidable). Have you checked out the conditional halting problem?
[ "Learning ODEs After the Class" ]
[ "math" ]
[ "e2hnkf" ]
[ 14 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
Heyoo, So I'm nearing the end of a course on ODEs at my uni and have been left wondering "when are you going to actually learn about ODEs?" We're getting to phase diagrams and such now, which feel far more like actually doing ODEs than the rote calculus and linear algebra we were doing before, but I still don't feel like I've actually learned anything about differential equations. The few glimpses I've seen of actual math through the computations have me intensely interested in differential equations, and I absolutely intend to get into PDEs after learning a bit more functional analysis, but how much should I go learn about the ordinary case first? I'm eyeing Taylor's 3-part series on PDEs, which has a first chapter on ODEs, but don't know if that'd be adequate. Should I just stick to that or read Arnold's/Hirsch and Smale's text on the subject? (I intend to read Arnold's classical mechanics text eventually but have not gathered the courage to wade through the physics yet.) Edit: I'm familiar with topology, geometry and algebra at a first-year grad level, analysis up to some basic measure theory, but only a bit of complex. Just haven't touched differential equations much.
There is very little ODE material in an introductory PDE class, and for that matter, not too much ODE material in general. Really the only thing I have ever needed for my undergrad study was the solution to a couple of simple ODEs like y''+y=0 and y"-y=0. Even in my graduate study, I don't use much ODE theory. Of course it's going to depend a lot on what specifically you choose to study, but you should find that ODEs and PDEs are vastly different.
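Those simple ODEs also make a nice first numerical exercise: rewrite y'' + y = 0 as a first-order system and integrate it. A minimal RK4 sketch (the step size and interval are arbitrary choices of mine):

```python
import math

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y), y a list of components."""
    k1 = f(y)
    k2 = f([yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# y'' + y = 0 as the system (y, y')' = (y', -y); y(0)=1, y'(0)=0 gives cos(t).
f = lambda y: [y[1], -y[0]]
y, h = [1.0, 0.0], 0.01
steps = int(math.pi / h)                  # integrate to t ≈ pi
for _ in range(steps):
    y = rk4_step(f, y, h)
assert abs(y[0] - math.cos(steps * h)) < 1e-6
```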
When someone comes out of introductory ODE without having their spirit crushed by the usually terrible curriculum, typically there are three pathways of study one could take to further explore. This guy on Stack Exchange makes a very impassioned argument to use Hale's text.
I do mostly grad stuff but took undergrad ODEs, which was a mistake. I'm not looking at ODE material in a grad class, but rather at that specific book, which focuses mostly on bringing together geometry and analysis (book 2 has Atiyah-Singer), so there's a chapter on ODEs that looks pretty legit but quite unlike the computations we did in class, or even the ones in a book like Arnold. I see ODEs come up a lot in mathematical physics and with vector fields on manifolds, but I don't see how that's really connected to what I learned before.
It sounds like what you actually want is a book on Dynamical Systems.
You should find a good textbook and read the proofs of the big ODE theorems (Picard's theorem etc). You're not gonna get much from UofA undergraduate courses on differential equations unless you wanted to take honors ODEs (which I believe is 336)
[ "What Are You Working On?" ]
[ "math" ]
[ "e2i3le" ]
[ 13 ]
[ "" ]
[ true ]
[ false ]
[ 0.93 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!
Solved an equation. The solution fits on two screens. Now, to simplify...
I have to write an expository paper instead of a final exam. I've chosen to write about analytic wave-front sets, and am now in awe at how all the writings on the subject have buried the very meaningful and intuitive dynamical idea here under mounds and mounds of technical language. Even in a book by the extremely friendly Guillemin, his chapter on the topic suddenly switches tone and is just as unmotivated as everyone else's. Also, I've been tapped to teach a relatively advanced course next semester! I'm extremely excited, because nothing makes you GET a subject you learned in undergrad like having to teach it.
That's not a bad idea actually. If only I had €1,590.
Just Grade 11 math. (Because I’m in grade 11) Mastering my trigonometry basics + I just started with Limits. Finding it really interesting because of the heavy manipulation one has to execute.
Click Simplify[expr] on Mathematica, should do the trick :p
[ "“Obvious” is perhaps the most ill-defined term in mathematics. How should it be used?" ]
[ "math" ]
[ "e2opex" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.69 ]
We’ve all read proofs where the author just writes “obvious” as the justification for a step which is anything but obvious to us. Does this reflect poor writing on the part of the author, or poor thinking on the part of the student, and how does one know? I think that, in a proof, one should almost never use the word “obvious”. If it’s so obvious, then write the reason instead of the word “obvious”. Sometimes, though, the truth of a statement is much easier to grasp than its proof. I think a good time to say “it’s obvious” is if the following hold (for a student who has never seen the material before): But when I read over what I’ve written above, I see so much opinion! Is there any way around this? What are your opinions regarding “obvious”?
“If Church said it’s obvious, then everybody saw it a half hour ago. If Weyl says it’s obvious, von Neumann can prove it. If Lefschetz says it’s obvious, it’s false.” - J. Barkley Rosser
It depends on the context. If I'm writing something for an audience of researchers in a given field, I don't even bother with writing "obviously", I just skip any steps they can do and proceed onto where something clever is needed. In the event something is meant to be instructive, I consider "obviously" as a stand in for, "this is an immediate consequence of the major results from earlier classes", or, "a bunch of straightforward busy work shows this". The main reason I see to use it is in the event that the details it elides would distract from the proof structure and there is no reason the audience couldn't supply those details. (Or it would needlessly waste time to cover the details) Definitely, it can be abused, but it is indispensable. In some form, or another, all proofs that aren't formal derivations from axioms assume the reader is doing some of the work between steps. "Obviously", obviously, calls attention to this, but it is almost always the case either way, it is just a matter of degree.
No, what I'm saying is that if people need every detail, they aren't ready for learning that material and probably need experience with earlier material; overly verbose proofs won't replace that. Exercises are where people get experience with the material they are reading, which is a different thing. In short, if someone needs every last step detailed, they would be better served by learning more than they would by having every detail. What do you consider a fully worked proof, by the way? You aren't starting from the axioms of ZFC, so what do you consider reasonable to omit?
"Obviously" is most often used when things that have been previously covered need not be covered again. Just like "trivially true" is sometimes not so trivial until we fully understand the theorem; then the trivial aspect is indeed trivial. It's not so much the fault of the student in the sense that they missed something, but many times, in regards to theorems of analysis for example, as students we understand what a theorem says and it resonates beautifully, but then we don't understand the nuances completely, which is sometimes why it's hard for us to create a proof even if we can cite a dozen theorems about the problem. For example, non-measurable sets took me a while to understand, because I had to really think about what it meant to be measurable and for the converse to be true.
It tells them that the following consequence follows fairly directly from what has already been said for someone at the intended skill level reading the proof. Or, for a research article, it says, "you can work this out, I don't want to waste your time". As for when to use it, that's an art, like all proof writing. A proof is like a poem, it should express what you want, be as beautiful as possible, and convey its intent to its audience. There is no explicit rule for how to accomplish this, and I'm happy there isn't. If you're truly curious, look at proofs you write and look at where you omit details that seem trivial (do you spell out every step of basic algebra?), look at proofs in books, see what steps they skip between lines.
[ "How many ways are there to make a cubic trisection?" ]
[ "math" ]
[ "tjkh1p" ]
[ 4 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.84 ]
null
Pure conjecture, but there are probably a finite number of ways to trisect a cube. However, this is only one way. There are plenty of different ways to do it, they just won't fit together to form this puzzle.
For trisections like this cube puzzle, there are an infinite number of ways it could be done: If you follow this procedure, the three rays will slice the cube into three identical pieces. However, there are no limitations on the rotation done in step 3. You can rotate in either direction at any rotation rate, you can change directions, you can change the rotation rate with position, etc. If you want the puzzle to be assemblable, then you can add the stipulation that the rotation rate must be constant, but even then there are an infinite number of possible values you could choose for that constant.
Yes, three identical rectangular prisms where the short dimension is 1/3 of the cube's side also works. It's not really a degenerate form, just another way it could be done, though it doesn't have any parameters that can be varied like the corner-to-corner slicing method. Though, to be pedantic, you could do it along the three different axes, giving you three different methods (though identical under rotation).
Is it possible that slicing a cube into 3 identical horizontal pieces or 3 vertical pieces is a degenerate form of the rules?
I think you might need to clarify what counts as a distinct "way".
[ "Extremely Counterintuitive Results in Mathematics" ]
[ "math" ]
[ "e2nf0y" ]
[ 143 ]
[ "" ]
[ true ]
[ false ]
[ 0.99 ]
I was recently asked to describe a result in mathematics that profoundly surprised me, and I thought it would be worth posting here for those interested. It's a rather advanced topic, so I'll provide some soft background so that it may be conceptually accessible to a broader audience. Almost every "object" in modern mathematics boils down to a set equipped with some extra structure (a notion of distance, operations on the set like addition/multiplication, linearity, etc.). The objects you deal with in early mathematics courses, typically open subsets of R^n, have a particularly rich structure to them. We can reinterpret R as being a field, a vector space, a manifold, and nearly everything in-between. They have almost any property you could want, which makes sense seeing as R is often the basis for these properties in the first place. Differentiable manifolds arise from asking "how similar must an object be to R^n for us to retain a meaningful notion of calculus?" Or rather, what type of structure should a set be equipped with to discuss calculus? The answer is not as easily seen as the question, but the crux of it is that the object must locally resemble R^n. For example, if you were to zoom in on a circle, you would see it getting flatter and flatter. In the limit, it looks like a line: R. To get to the realm of differentiable manifolds, however, there's a hierarchy of structures that you must equip to some underlying set that (in the context of geometry) goes: Set ---> topology ---> topological manifold ---> differentiable manifold. The topological structure allows one to talk about the notion of continuity within the set. The topological manifold structure is just a super nice topological structure that allows us to omit some of the weirdness that you can get in topology. Specifically, a topological manifold is a topological space that, in some sense, looks sufficiently close to euclidean space.
The differentiable structure is a level beyond this: it's a topological space that looks locally like R^n in a smooth enough way that we can discuss the idea of differentiation and tangent spaces. An interesting question to ask is "given a topological manifold, how many differentiable structures can you add to it?", where "different" essentially means that the spaces have a fundamentally different notion of 'calculus'. Even more practically, having different differentiable structures means that calculations involving calculus on one manifold cannot be used to determine calculations involving calculus on the other (despite them being equivalent as sets, topological spaces, and topological manifolds). The answer is quite surprising, and is partitioned by the dimension of the manifold, where this dimension is given by the dimension of the euclidean space (i.e. the 'n' in R^n) that the manifold locally resembles. Manifolds of dimensions 1, 2, and 3 have a unique differentiable structure. That is, the underlying topological manifold admits a natural choice of calculus. In dimensions 5 and above, the differentiable structure is not generally unique, but there are only finitely many different differentiable structures you can have. In principle, this means that we could classify all of the different types of calculus up to diffeomorphism. In dimension 4, there can be uncountably many different differentiable structures you can add, in effect meaning that the notions of a topological manifold structure and a differentiable manifold structure are the most separated from each other in dimension 4. Feel free to discuss this in the comments or post your own experience with an extremely counterintuitive result in mathematics. Cheers!
On the subject of topology, the eversion of the sphere. (Though you could evert a cuboctahedron, too, and call it a geometrical property. IIRC you need to dissect the square faces into two triangles.) Gödel's Incompleteness Theorem(s) is quite counterintuitive. Rough punchline: some things can never be proven, and it is consistent with logic that logic is inconsistent. It's 1am, but I'll try my best at an explanation. The sentence "This sentence is false" (Sentence 1) is not OK. It's paradoxical (it can neither be true nor false). But we can throw it out, by banning all sentences that are self-referential. The sentence "The sentence obtained by substituting 'The sentence obtained by substituting x into itself is false' into itself is false" (Sentence 2) amounts to the same thing when you think about it, so it's also not OK. We can throw it out, by banning all sentences that refer to other sentences. Another, perhaps better, way is to have a level system: sentences about just numbers and other mathematical objects are level 0, sentences about level 0 sentences are level 1, sentences about level 1 sentences are level 2, etc. What I've labeled Sentence 2 has to be level n and n+1 at the same time, so we throw it out. (Note that this also takes care of Sentence 1.) However, given a set of axioms, "The string of symbols obtained by substituting 'The string of symbols obtained by substituting x into itself is unprovable from the axioms' into itself is unprovable from the axioms" (Sentence 3) is not only OK, but it's actually unavoidable. First, why haven't we thrown it out already? It technically only refers to strings of symbols (axioms are rules for manipulating strings of symbols), so despite all appearances, there's no self-reference. In the above classification, it's just level 0. The reason I say Sentence 3 is unavoidable is that you can express it in a seemingly harmless system like PA (the Peano Axioms). PA is, essentially, the bare minimum you'd want in a mathematical proof system.
(The fact that PA can express Sentence 3 isn't immediately obvious, but the main idea is to encode strings of symbols as numbers.) So the only way we can ban this sentence is to ban so much that we can barely say anything at all! A little thought will show that Sentence 3 (which is really a statement about strings of symbols — or, in the PA version, a statement about natural numbers) must be true but unprovable. The fact that there are true unprovable statements is Gödel's First Incompleteness Theorem. A little more thought shows that, if PA could prove that PA is consistent, then we could repeat the argument above, and PA would be able to prove that Sentence 3 is true. (Apologies for the recursion!) This contradicts the fact that it's unprovable, so the only resolution is that PA can't prove that PA is consistent. This is Gödel's Second Incompleteness Theorem. (We can replace PA with essentially any other formal proof system; it works out the same.) Another, equivalent way of saying "PA can't prove that PA is consistent" is "It is consistent with PA that PA is inconsistent". Thus, some things are true but unprovable, and therefore forever beyond the reach of logic. And if this seems similar to the proof of the undecidability of the Halting Problem, well spotted, they're very similar.
Unfortunately, I've yet to hear of any satisfying explanations. There's a plethora of technical reasons, but they rely on special features of four-dimensional geometry rather than answering the "why". Still, I can provide a vague and slightly naive but mildly satisfying analogy. Take the differential equation given by: x' = A - x^2, where A is some scalar parameter. We can break this into 3 major cases: A > 0, A = 0, and A < 0. Within these respective cases, the solutions vary with A but largely retain their qualitative features, whereas solutions in different cases have fundamentally different behaviors. The point A = 0 is called a bifurcation point, and marks an instantaneous transition between the other two regimes. The behavior at a bifurcation point is often strange and in some cases requires techniques quite different from the other two. In a certain sense, 4-dimensional structures are like a bifurcation point that separates lower-dimensional geometry from higher-dimensional geometry.
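To illustrate the three regimes numerically, here is a minimal sketch using the standard saddle-node normal form x' = A - x^2 (the original comment's exponent was lost in extraction, so this normal form, and all the numeric choices, are my assumptions):

```python
def simulate(A, x0=0.0, dt=1e-3, steps=20_000):
    """Forward-Euler integrate x' = A - x**2; return the final x, or None
    if the solution escapes to -infinity (finite-time blow-up)."""
    x = x0
    for _ in range(steps):
        x += dt * (A - x * x)
        if abs(x) > 1e6:
            return None
    return x

# A > 0: two equilibria at +/- sqrt(A); trajectories starting above -sqrt(A)
# settle onto the stable equilibrium at +sqrt(A)
settled = simulate(1.0)

# A < 0: no equilibria at all; every trajectory eventually blows up
escaped = simulate(-1.0)
```

At A = 0 the two equilibria collide and annihilate, which is exactly the qualitative jump the comment describes.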
Dimension 4 is the dimension with infinitely many differentiable structures--and it has uncountably many at that.
I always found it slightly unintuitive that if you’re solving a diffusion equation with a FDM, taking dx to be too small can cause instability in your algorithm and lead to blow ups.
Not only is it possible, as Weierstrass proved in the 1800s, for a continuous function to be differentiable nowhere, it's possible for an infinitely differentiable real function to be analytic nowhere.
[ "Gold Standard Textbook for an Introduction to Discrete Mathematics" ]
[ "math" ]
[ "e2i3gv" ]
[ 25 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
Hey everyone, I might be teaching a discrete mathematics class soon, and I was wondering if there were any amazing discrete textbooks that you knew of? I'd be looking for a book that is 1 of 3 things: Either the gold standard discrete book that is the defacto standard for learning, a discrete book with lots of parallels and examples that relate to Computer Science, or a discrete book that includes a little bit of abstract algebra (such as group theory). Thank you everyone!!
A good source for additional material could be "Concrete Mathematics" by Don Knuth et al.
"Mathematics for Computer Science" by Lehman, Leighton, and Meyer is free as a pdf and there are lots of course materials on MIT OCW to go with it.
"Concrete Mathematics" is a decent book, but the half-assed jokes in the margins of every page are infuriating. It's like the authors thought only children would read their book.
Every once in a while, there's a good one.
+1; that's the one I'd choose. Link to newest version . Certainly has a lot of CS material.
[ "Career and Education Questions" ]
[ "math" ]
[ "e2zz90" ]
[ 19 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
This recurring thread will be for any questions or advice concerning careers and education in mathematics. Please feel free to post a comment below, and sort by new to see comments which may be unanswered. Please consider including a brief introduction about your background and the context of your question.
How important is it to mention in my personal statement things faculty are doing that sound interesting to me? I am aware of the names of the areas within the field and what kinds of questions they are researching, but I have not much more than a surface-level understanding.
Undergrad research isn't important on its face, what matters the most are letters of recommendation. The problem is that if you don't have any extracurricular work, your letters of recommendation probably won't say anything special about you. But you don't need to do research to do extracurriculars. You can do a 1-on-1 reading course with a professor. Maybe ask about their research. Tell them you want experience reading a research paper and you want to get a better feel for what kinds of math people do research on. You don't necessarily need to do the researching yourself. You can also do relevant volunteer work. For instance in the US, you could maybe volunteer at a math camp: http://www.ams.org/programs/students/emp-mathcamps
I would not read this well on an application. I think it would both draw attention to the fact that you did poorly and sound like you are making excuses.
I’m an applied math major who can either take an abstract algebra course on groups+rings or point-set topology before graduation, since I started late. I plan to learn both regardless. Which topic should I learn on my own—Algebra with Dummit and Foote, or Munkres’ Topology? At my school abstract algebra isn’t taught by tenured faculty, just postdocs. I am interested in Dynamical Systems, Probability/Stochastic Processes, and PDEs. I’d consider applying to Stats or applied math PhD programs.
Fyi, calculus 2 as in integral calculus (methods of integration, disk/washer, polar/parametric, sequences and series, etc.) is not remotely close to being the hardest class math majors take.
[ "Question from my daughter assignment" ]
[ "math" ]
[ "tj91lr" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.25 ]
null
/r/learnmath is better for homework questions
From the sidebar: Homework problems, practice problems, and similar questions should be directed to /r/learnmath , /r/homeworkhelp or /r/cheatatmathhomework . Do not this type of question in /r/math .
[ "Why is calculus still considered advanced mathematics for average people nowadays who were raised in a modern society, given it was invented in the 1600s? After several hundred years, shouldn't the mastery of calculus have reached the mastery of arithmetic which we expect of average people nowadays?" ]
[ "math" ]
[ "tj6470" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.31 ]
null
...You realize each new person starts from the bottom right?
Because average people don't get enough education. Public education is not well-funded enough to get beyond calculus or to teach enjoyment of math in fewer than 13 years of school.
I'd like to kick your superior attitude right in the calculus.
Oh look he's about to say his first word!! d/dx 1/x = -1/x^2 Awwww so cuteeee! 😍😍
Calculus is also the most advanced class a layperson may take. "Average" is also relative. Many education systems do not even provide the resources to easily reach calculus levels.
[ "I love math but have no idea where I should go to school" ]
[ "math" ]
[ "tj2i07" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
null
The only universal thing I can say is that you should learn math using English. There is a much larger variety of English textbooks of much higher quality for undergraduate topics like analysis, algebra, etc. If you're not fully comfortable in English, you can try to learn early undergraduate topics like calculus simultaneously in English and your native language. English is also the language of math research in today's world, so you will need to read English if you ever want to understand what other researchers are doing and write English if you want other researchers to understand you. Other than that, what schools you should apply for depends on your career aspirations, financial background, current level of knowledge, and many other things. If you are curious about schools in China in particular, I'm not sure that reddit will be very helpful because most users here are from the West.
Finance and computer science are both great fields to go into after undergrad, but you should look into more than a math degree for those. You should look at combined programs like "financial mathematics," "mathematics of computation," or at least applied math because a pure math degree alone will not prepare you for jobs in those fields. If you can join clubs, take electives, or find internships in finance or CS during a math degree, it might be okay to do a pure math degree. Pure math majors without any industry background usually go to grad school out of undergrad. I've heard that some schools in China do teach upper-division classes in English, or at least using English textbooks. I wouldn't be too concerned about it. Good luck!
I actually graduated from an English high school and studied math up to a calc I level(?); some calc II stuff was also covered in English. I have tried self-studying some of the undergraduate topics you mentioned and found them incredibly fascinating. Yet I want to study in China, as I love Chinese culture and want to learn part of my heritage (one of my parents is Chinese). I would like to go into a quantitative field in finance but don't want to be limited by that. My main aim is to achieve financial freedom as early as possible so that I have time to travel, and I believe finance, together with my inclination for math, would be the best way to achieve that. Any more advice would be greatly appreciated! P.S. I have already decided which university I want to go to in China, but I'm just considering my options in English-speaking countries too.
Thank u so much for your advice! That’s awesome!
China...
[ "What would be better to go along with a CS degree, pure math or applied?" ]
[ "math" ]
[ "tizayi" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
null
Unfortunately, your submission has been removed for the following reason(s): Career and Education Questions /r/mathematics /r/matheducation If you have any questions, please feel free to message the mods . Thank you!
Im interested in computers and computer science and also programming. Along with that I have a passion for math. I guess i should have asked what field would be the best way to combine these interests in potential jobs. Physics is also pretty fun as its very close to math, I know that favors applied math.
It depends on your goals and what you want to achieve in your carrer.
I have a master's in applied mathematics and I work as the primary web developer/ coder for a small company. The company performs analysis on stocks and I am in charge of ensuring the algorithms are accurate and efficient. I regularly use statistics, linear algebra, and non-linear optimization. I have also written mathematical proofs for some of the more complex algorithms. In my undergrad experience (at a smallish private university where I minored in computer science) individuals that are good at math and can translate some of the more complex processes into computer algorithms are rare and valuable. I would say an applied math degree or a statistics degree would work well with CS. Especially if you can take a CS course that focuses on translating math equations into code.
[ "How do I get better at making algebraic equations for questions?" ]
[ "math" ]
[ "tiy4ap" ]
[ 8 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.79 ]
null
Try to figure out the underlying logic of the problem. Be willing to turn their words into your words. Then see if you can say something that expresses that thought exactly. If the logic feels a bit sketchy, try a different way of expressing the problem. Do this until one feels like it explains the problem clearly and in a way that shows the equality between two aspects of the problem. That may be your equation.
You mean like word problems? Use key words like “of” “is” and “each” to translate to algebra
Stephanie is registering her son for swim lessons. There is a one-time $45 registration fee plus a $20 fee per lesson. Stephanie paid a total of $225. For how many lessons did Stephanie register?
The first step is to figure out the relevant quantities, which is usually given by the question. Here, it doesn't matter that Stephanie is called Stephanie or that the lessons are for her son and not her daughter, for instance. The relevant quantities are the fees and the number of lessons her son takes. Once you've determined the things that actually matter, it's time to give them names. The only things requiring a "name" here is the number of lessons and the total cost, so let's use n and P(n) to denote those, respectively. There's no need to "name" the fees, because we're given hard figures for those. Now you'll want to start building an equation. P(n) is the total cost, so it should be comprised of the cost per lesson, the number of lessons, and anything extra. The "anything extra" refers to the one-time registration fee in this case, which is $45. P(n) = 45 so far. Next, we're told that the cost per lesson is $20. If that's the case, then two lessons would cost $40, three lessons would cost $60; in general, n lessons would cost 20n dollars. We're almost there. P(n) = 45 + 20n. We're told that this is equal to $225. We can therefore substitute this for P(n) in the equation to get 225 = 45 + 20n. From here it's just straight algebra: subtract 45 from both sides and divide the result by 20. You'll end up finding out that Stephanie's son took 9 lessons. tldr: first, determine what it is the question's asking for. Any of them that don't have hard figures, denote them with a symbol. Then use the given information to construct an equation (or equations) - you'll want to become familiar with the longhand way of describing arithmetic operations ("sum"/"total" means to add, "difference" means subtraction, etc). The most important part, I would say, is to be able to determine what's relevant and what's not.
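The equation built step by step above can be double-checked in a couple of lines (the function name and default arguments are mine, taken from the problem's figures):

```python
def lessons_registered(total_paid, registration_fee=45, cost_per_lesson=20):
    """Solve total_paid = registration_fee + cost_per_lesson * n for n."""
    return (total_paid - registration_fee) / cost_per_lesson

# P(n) = 45 + 20n and P(n) = 225 give n = (225 - 45) / 20
n = lessons_registered(225)
```

Running this reproduces the answer found algebraically: nine lessons.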
Not trying to pick on your answer, but this is absolutely not the right way to learn. It's only justified if you're just incapable of any other way, and need to maximize how well you score on a test without actually learning anything. This habit is so destructive that when I see it in a student, I know they are probably lost and confused, and I'm in for a really long and difficult path to get them to the point where mathematics makes sense to them again. Teachers really need to get in the habit of deliberately writing word problems that use these ridiculous "key words" in ways that don't imply the operation that other lazy teachers have taught students to jump to. That way, they can accurately assess which of their students actually understand what they are doing, and which of them learned shallow tricks to game the system. Like "At a store where t-shirts are $15 , Ginnie bought a t-shirt and sandals, gave the clerk $30, and got $8 back. How much do sandals cost?" After all, you cannot teach what you cannot assess, and if poor teachers are going to coach students in gaming the assessments, then better teachers need to write better assessments that can see through the tricks.
[ "Can a set contain itself? Why or why not?" ]
[ "math" ]
[ "tjll4b" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
[deleted]
depends on what you mean by set
The rabbit hole goes deep. For all practical purposes (unless you dive deep into graduate/research-level mathematical logic), standard mathematics is based on a list of axioms called ZFC. These don't allow for sets to be elements of themselves. But there can be alternative mathematical constructions which allow for different things, and then things get massively complicated. A simple, easy-to-understand question related to this is "does there exist a set which has strictly more elements than the natural numbers and strictly fewer than the real numbers?". This is called the continuum hypothesis, and its answer might surprise you in that it is not a yes/no question. Everything in math depends on what axioms you want to start with. In short, unless you more or less plan on specializing in mathematical logic, the answer is: No. Such sets don't exist; we can't have sets including themselves as elements. Read up on Russell's paradox, quite interesting stuff.
https://en.wikipedia.org/wiki/Axiom_of_regularity
For those looking to dive into more details, I recommend reviewing restricted vs. unrestricted comprehension. As mentioned, generally, modern axiomatic set theories do not allow unrestricted comprehension as this leads to paradoxes.
From a set theoretic perspective, “numbers” are sets.
[ "Economic Input-Output Models as Graph Networks: Exploring conceptual analogies" ]
[ "math" ]
[ "tjjj7t" ]
[ 69 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
null
It's a bit of a niche topic; I hope there are aspiring economists with a mathematical bent, or mathematicians with an interest in this type of economic input-output model, who might be intrigued and dig deeper.
What would you like us to do with it? I work in international trade, and it is super common to use the Leontief inverse on an input output table in order to figure out gains from trade. See here for example this summary paper from a few years ago, section 3.4 is the part where they talk about input output linkages: https://economics.mit.edu/files/9960
Umm, useful reference, thanks for sharing. It's quite a big model, but the input-output part seems like a classic approach in its use of the Leontief inverse. Some techniques that go deeper into the graph-theoretic part go by the names of qualitative and structural path analysis: essentially various ways of decomposing or characterising the connectivity structure between sectors and regions. I am not familiar with gravity models, but assuming it is a way of injecting a spatial scale into economic systems (through transport costs etc.), it points to interesting directions where the graph of linkages gets some sort of metric associated with it.
That sort of thing exists in Economics too. Here is a well-cited paper which shows how particular network structures lead to shocks to small numbers of firms propagating into aggregate macroeconomic fluctuations: https://economics.mit.edu/files/8135
This feels more relevant to supply networks from a production & manufacturing perspective than a macroeconomic perspective. The macroeconomically relevant analog would probably be computable equilibrium modeling.
[ "Could we envision a complex number space in which i² does not equal −1?" ]
[ "math" ]
[ "tj9zi7" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 0.61 ]
null
You need to better formulate your question. What exactly do you mean by "complex number space"? The complex numbers C can be thought of as a two-dimensional real vector space R^2 with an added "complex structure". Up to isomorphism I believe there is only one way to have a complex structure on R^2.
Yes, consider the split-complex numbers https://en.wikipedia.org/wiki/Split-complex_number where the unit circle becomes the unit hyperbola.
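To make the comparison concrete, here is a minimal sketch of split-complex arithmetic, where the "imaginary" unit j satisfies j² = +1 rather than −1 (class and method names are mine):

```python
class SplitComplex:
    """Numbers a + b*j with j*j = +1 (instead of i*i = -1)."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc)j, using j^2 = +1
        return SplitComplex(self.a * other.a + self.b * other.b,
                            self.a * other.b + self.b * other.a)

    def modulus_sq(self):
        # the invariant a^2 - b^2; its level sets are hyperbolas, not circles
        return self.a * self.a - self.b * self.b

j = SplitComplex(0, 1)
jj = j * j  # the unit squares to 1 + 0j
```

The `modulus_sq` invariant is exactly why the "unit circle" of this number system is the unit hyperbola a² − b² = 1.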
Let f:R[√-1]->R[√-4] be given by f(a+b√-1)=a+(b/2)√-4. Then f is an R-algebra isomorphism.
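The claimed R-algebra isomorphism is easy to sanity-check numerically, representing a + b√-d as the pair (a, b) (helper names and sample points are mine):

```python
def mul(d, z, w):
    """Multiply pairs (a, b), (c, e) representing a + b*sqrt(-d) and c + e*sqrt(-d)."""
    (a, b), (c, e) = z, w
    # (a + b*sqrt(-d)) * (c + e*sqrt(-d)) = (ac - d*be) + (ae + bc)*sqrt(-d)
    return (a * c - d * b * e, a * e + b * c)

def f(z):
    """The proposed map R[sqrt(-1)] -> R[sqrt(-4)]: a + b*sqrt(-1) |-> a + (b/2)*sqrt(-4)."""
    a, b = z
    return (a, b / 2)

# multiplicativity: f(z*w) computed in R[sqrt(-1)] matches f(z)*f(w) in R[sqrt(-4)]
z, w = (3, 4), (-2, 5)
lhs = f(mul(1, z, w))
rhs = mul(4, f(z), f(w))
```

Spot-checking points like these (plus the obvious linearity and f(1) = 1) is what the one-line proof of the isomorphism amounts to.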
Michael Penn did a good video recently about the dual numbers: https://youtube.com/watch?v=ceaNqdHdqtg
Or the dual numbers.
[ "advice regarding aime?" ]
[ "math" ]
[ "tjoglh" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.71 ]
I’m currently a freshman, and I’ve been doing AMCs since middle school. I basically didn’t prepare at all except for maybe doing 4-5 past contests, and I barely made honor roll when I took the AMC8. However, I wasn’t as lucky with AMC10. I took it twice, and both times I was ~20 away from the cutoff for AIME(still with no prep). I’m wondering if it’s possible to make AIME through this year’s AMC10 if I start preparing now? I’ve heard so many different things, some people are saying that AIME isn’t difficult and you just have to do a bunch of past contests, while others are saying that I need to do the aops books and learn the concepts and that it’ll be difficult to make AIME with only ~1/2 year of prep. So, should I try to qualify for AIME or is it a waste of my time? I feel like it would be very beneficial for college apps, especially because not a lot of girls at my school do competitive math, but I also don’t want to spend time on it if the probability that I’ll make it in is too low. Also, what resources should I use if I want to prepare? I know past contests are good, and lots of people also recommend going through aops books, but I don’t know which book to get. Also, how beneficial would it be to take classes? I remember I hated taking math classes when my mom signed me up for them back in like 4th/5th grade but if it can actually help me understand the concepts better I’d be willing to give it a try. Any advice would be appreciated. Thanks.
I also don’t want to spend time on it if the probability that I’ll make it in is too low. If you require immediate success as your only acceptable reward, you'll probably be disappointed. Don't waste your time. I can't relate to this mindset at all. I studied and competed in AMC (then AHSME) because it was fun and challenging. I missed AIME 9th and 10th grade, made it in 11th, and then in 12th I got to take the USAMO. I then proceeded to score a zero on that USAMO. Did that make it a waste of time? Hell no. How can a number like a score make math into a waste of time? But that's me. You don't seem that excited about it, so why do it? Trust me, life is too short to spend your time doing shit you don't like.
Hmmm… I faintly remember there’s some quote along the lines of “when a metric becomes a goal, it loses value as a metric.” I think you can make AIME; I think anyone can if they put their mind to it, but making AIME isn’t going to help you much for college. You’d have to perform well on the AIME and qualify for USAMO for it to be useful, in my opinion. You’re better off just doing well on the ACT/SAT for college, or setting your sights higher at the USAMO level.
How about this: Want something in life. Want something enough that you're willing to work at it even when it's not fun and there's no guarantee of immediate success. If that's math, great! If it's something else, great! Want some fucking thing and work toward it no matter how long it takes. Better yourself, not in ways that you calculate that colleges will like, but ways that matter to you for some actual fucking reason. Is that better? (Also OP is probably not a "he".)
(because paragraph 3)
No, just, no. Doing things you don’t like is necessary sometimes, or no one would ever graduate high school or for that matter manage to feed themselves. I find platitudes like “only do what you like” as annoying as their opposites (“only do what is practical”). I think OP should study for the AIME, if for no other reason than that the calculating skills he’ll practice are likely to help him in the future. Learn math while your brain is still highly plastic, OP.
[ "Best books for linear algebra?" ]
[ "math" ]
[ "tjkpsp" ]
[ 43 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
I teach game dev, and one area that I’ve started helping out in is graphics programming. While I can hack my way through shader writing and transformations, I’d really like to close the gaps in my knowledge, both for myself and also so I can contribute more to that side of teaching it, including being able to better explain concepts to students. Can anyone recommend some good books on the topic? Like I say, it’s both for my own development, but also to help get my students excited about an area that often irks them.
Since you're an engineer/programmer, I'd recommend something like Strang or Linear Algebra Done Wrong (the title is a pun on the LADR mentioned below and is more applied in nature) or a myriad of other "applied" linear algebra books. Anything more theoretical than that (like LADR, Friedberg, Hoffman and others) would probably lead you too far astray.
I'm not sure Linear Algebra Done Right is a good recommendation for game designers. I imagine game developers would be a lot less concerned about developing linear algebra coordinate-free and more concerned with matrix computations than Axler is. Also, people who aren't familiar with proofs also tend to struggle with the book; it was very unpopular with the engineers the year they used it in the undergrad classes!
The "linear algebra done right" army invading the comment section, (great book though, a masterpiece)
Seconding Strang or LADW, I studied from Strang at an engineering school and LADR in preparation for math grad school. The perspective is different, and more applied in Strang.
For computer graphics it's very beneficial to take a geometric stance, i.e. treat linear algebra jointly with affine and projective geometry. It's a bit hard for me to gauge where you are at in terms of level, but a very easy introduction would be: Janke "Mathematical Structures for Computer Graphics" An exciting but advanced text would be: Gallier "Geometric Methods and Applications" Inbetween these texts are usually more accessible treatments of projective geometry. I recommend the recent: Richter-Gebert "Perspectives in Projective Geometry"
[ "How did early mathematicians derive the formula for calculating the area of a circle, ( A = Pir^2 ), without calculus?" ]
[ "math" ]
[ "tjm0aj" ]
[ 340 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
null
A standard calculus approach to calculating the area of a circle would involve approximating the circle with rectangles that have parallel sides. Archimedes did so by approximating the circle with triangles sharing a tip at the circle’s center. In both cases, the fact that we know the area of rectangles already is what allows for an area approximation. Archimedes essentially did integral calculus. He made a scheme of approximations and took a limit. The rectangles approach standard to integral calculus (Riemann integration) is more generalizable to shapes given by graphs of continuous functions, but the approach is essentially the same. In the case of the circle, the fact that archimedes beat out calculus by hundreds of years should show that his triangle approximations were actually the simplest way to go.
Well it is kind of easy to see that the area of a circle is proportional to r^2 so you only need to determine the constant of proportionality. And it is also intuitively clear that the circumference of a circle is proportional to its diameter 2r. The real surprise is that the constant is the same. You can see it intuitively by dividing a circle into a large number of triangles. The area of a triangle is base * height / 2, so if you sum this up over all triangle you will find area = circumference * height / 2. As you get more and more triangles, the height approaches the radius and Area = Circumference * R / 2.
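The triangle dissection described above is easy to check numerically; here is a minimal sketch (function name is mine):

```python
import math

def circle_area_by_triangles(r, n):
    """Approximate the disk of radius r by n thin isosceles triangles
    sharing a tip at the center, each spanning angle 2*pi/n."""
    # each inscribed triangle has area (1/2) * r^2 * sin(2*pi/n)
    return n * 0.5 * r * r * math.sin(2 * math.pi / n)

approx = circle_area_by_triangles(1.0, 1_000_000)  # converges to pi * r^2
```

As n grows, each triangle's height approaches r and the total base approaches the circumference, so the sum tends to circumference * r / 2 = pi * r^2, exactly as the comment argues.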
As usual with math, intuition comes first, rigour comes second
That was long after the formula had been discovered
they're both distances, so scaling would scale any distance by whatever factor you choose.
[ "Noether's Theorem and conservation laws - recommended books?" ]
[ "math" ]
[ "tj7ks3" ]
[ 41 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
Hello I have a hankering for studying up on Noether's Theorem and how it's relevant for conservation laws in field theory/physics. I have a math degree but I didn't take any differential geometry and haven't studied Lie algebras. I have a strong analysis background and good exposure to algebra (up to and including Galois theory) so I don't need a book that covers those topics. Suggestions?
I believe Arnold’s "Mathematical Methods of Classical Mechanics" contains what you’re looking for. It discusses Lagrangian mechanics on manifolds in chapter 4, including Noether’s theorem.
Ana Cannas da Silva's symplectic geometry notes cover Noether's theorem in one of the later chapters. But these notes are no substitute for an introductory differential geometry textbook.
I have never really read an introductory differential geometry textbook. What I did was this: make the mistake of enrolling in a symplectic and Kähler geometry course without having anywhere near the required background (I was still an undergraduate!), then learn differential forms on demand by seeing how the professor used them, and have a bunch of eureka moments. That being said, other people have recommended two books to me, Tu's and Lee's introductions to manifolds:
Both of those books are good, OP. Tu is more focused on just covering the basics of manifold theory while Lee is bigger, but Lee is the one that has material on Lie groups and Lie algebras so is probably more appropriate.
Rec a good one of those? Don’t mind it if I end up getting 2-4 texts in total.
[ "A differential topology ρroblem" ]
[ "math" ]
[ "tj6ftv" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.71 ]
Suppose M is a connected 2-dimensional smooth submanifold without boundary of R^3 containing two points (x, y, z1), (x, y, z2). Does it follow that there exists a point p in M such that the Jacobian of the projection of M onto the x-y plane has non-full rank?
Consider the universal cover of the open annulus in the x-y plane, embedded as a helix rising in the z-direction in R^3.
This would be a counterexample if boundary were allowed, I believe. Forgot to add boundaryless as a condition.
Yes, the universal cover of the open annulus is a manifold without boundary diffeomorphic to R^2.
Oh, open annulus. It embeds as a manifold without boundary yes?
Amazing that you can get it diffeomorphic to R^2 as well.
[ "Does anyone have a use for the first billion numbers and their multiplicative persistence?" ]
[ "math" ]
[ "tj2j65" ]
[ 21 ]
[ "" ]
[ true ]
[ false ]
[ 0.77 ]
I left my raspberry pi running and it found the multiplicative persistence of the first billion numbers🤷. Do any of you have a use for them?
This should take less than a minute (and likely orders of magnitude less) to recalculate. Like lists of primes, it's easier to recalculate than to save. Check out Project Euler for more interesting problems.
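The recalculation really is a few lines; a minimal sketch (function name is mine):

```python
from math import prod

def persistence(n):
    """Multiplicative persistence: how many 'multiply the digits together'
    steps it takes to reach a single-digit number."""
    steps = 0
    while n >= 10:
        n = prod(int(d) for d in str(n))
        steps += 1
    return steps

# e.g. 39 -> 27 -> 14 -> 4 takes three steps
```

Looping this over the first billion integers is embarrassingly parallel, which is why a saved table adds little over recomputing on demand.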
Nope. Please do not even try to upload this to OEIS. It will be rejected. There is already a b-file with 10000 terms. This is the limit for most sequences, especially if the terms are trivial to compute.
A031346 : Multiplicative persistence: number of iterations of "multiply digits" needed to reach a number < 10. 0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,1,1,1,1,... I am OEISbot. I was programmed by /u/mscroggs . How I work . You can test me and suggest new features at /r/TestingOEISbot/ .
But also, maybe not worth the review time. Sloane was able to check up to 10 with some clever tricks: http://neilsloane.com/doc/persistence.2.jpg
For those playing along at home: this is a good, fairly simple exercise with dynamic programming (like many other programming and Project Euler problems).
[ "Elliptic Regularity with mixed boundary conditions" ]
[ "math" ]
[ "tiu9vk" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
Hello! I am looking for any references that discuss the elliptic regularity (mainly the Poisson equation) with mixed boundary conditions Dirichlet-Neumann. Can you point out anything related you know about, please?
Try Elliptic Problems in Nonsmooth Domains . Chapters 4 and 5 cover boundary value problems in polygonal domains with mixed boundary conditions. See if you can access the PDFs through your university's library. Feel free to PM me if you can't.
Maybe this will be of interest to you, I found it randomly some times ago and, I haven't read the details but it seems to describe the junction between the two boundaries, basically saying that it locally looks like a polynomial of degree n+1/2 at the boundary.
I don't know much but when I see these words the book "Nonhomogeneous Boundary Value Problems and Applications" by Lions and Magenes come to my mind.
The main idea is the Almgren monotonicity formula, which is discussed here (for harmonic functions, on which it is a bit overkill). This is a quantity N(r) that depends on u restricted to B_r, and is approximately N if the first term in the expansion of u is of order N; in other words, it detects the natural degree of homogeneity of u near 0. What makes the quantity N(r) particularly useful is that it is monotone, so it has a limit as r->0; there is a single N(0) that is the right homogeneity of u near the origin. This means that it is natural to look at the rescaling u_r(x)=u(rx)/r^{N(0)}, and this is more or less what they do. Then they are able to say that the limit of this as r->0 exists and is non-trivial using uniform Hölder estimates, and then the N(0)-homogeneous solutions can be computed by hand; it turns out that N(0) must be of the form n-1/2 (which corresponds to the eigenvalues of the Laplacian of a segment in 1d, with Dirichlet boundary condition on one side and Neumann on the other). In the references they give a few papers that do the same thing on a less general problem; perhaps the idea will be clearer in those.
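For concreteness, the frequency function mentioned above is usually written as follows (a standard normalization for harmonic u; the paper in question may scale it differently):

```latex
N(r) \;=\; \frac{r \displaystyle\int_{B_r} |\nabla u|^2 \, dx}{\displaystyle\int_{\partial B_r} u^2 \, d\sigma},
\qquad\text{with rescaling}\qquad
u_r(x) \;=\; \frac{u(rx)}{r^{\,N(0)}}.
```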
Thank you. Actually, I have checked and rechecked it and it doesn't seem to treat the problem that I am addressing.
[ "What Are You Working On? March 21, 2022" ]
[ "math" ]
[ "tjf3ti" ]
[ 30 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on this week. This can be anything, including: All types and levels of mathematics are welcomed! If you are asking for advice on choosing classes or career prospects, please go to the most recent .
For once, absolutely math related. This is because I successfully defended my PhD dissertation on March 11 (my 30th birthday, no less)! By a massive stroke of luck, the outside committee member—a professor of electrical engineering—happened to be aware of the p-adic methods from mathematical physics which are most closely related to my approach. I'm still waiting to hear back from him, seeing as he told me that he had additional questions for me about my work. Apparently, my number theoretic work on Collatz-type dynamical systems might have applications to quantum (de)coherence and quantum computing. Also, my defense presentation itself went over swimmingly, which is great, since I'm going to be presenting that same material at a talk in May for the special session of the western AMS regional conference. :D In terms of what I'm actually doing at the moment, other than TA-ing for introductory real analysis on Thursdays, I'm getting back into working on my novel.
Working on some small diagrams (just kidding I'm working on absolutely giant diagrams)
We just proved the sylow theorems in my introduction to abstract algebra class, and my topology class's writing assignment is introducing us to homology which I'm still not quite sure what it is yet :)
Trying to understand the connection between homologies (homologous functions) and Ricci flow. Any resources mentioning both in a similar context would be appreciated.
Trying to survive my courses and taking a deeper delve into number theory and ring theory following Stillwell's Elements of Number Theory. These are topics I find very interesting, particularly since I am half CS student, so their intersection is something I find quite nice. At this exact moment I’m trying to convince myself to get off Reddit and study for my computer architecture midterm. Edit: I’m like 75% sure I failed that midterm. But at least it’s done now so I can go do math :)
[ "Co-author plagiarized parts of the paper introduction and exposition. What to do besides apologizing and recalling paper?" ]
[ "math" ]
[ "tj1rw8" ]
[ 570 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
A little background. Another researcher reached out to us to notify us about copied text in a recent paper we submitted a while ago. It turns out the first author on the paper (I am further down) copied explanatory text word for word from this other paper. I had no idea. I am early in my career and really worried that this will hurt me. What should I do, besides the obvious two things of Do I need to inform the institution where I work as well? This is an esoteric field of pure math so not a ton of people reading it but still. update: someone claiming to be the co-author posted a response. They went so far as to show this entire post to their academic supervisor as proof that I was somehow trying to cover my tracks. Ironically, this situation had nothing to do with that and was a completely separate incident. I came here for advice but I would highly advise against taking anything in any reddit forum as pure fact. I would also urge everybody to keep a paper trail using version control. This way you can see who made the problematic sentences. Also, funnily enough, I guess this happens more often than one would think.
I think you should notify your immediate supervisors and let them make the decisions. You don't want to hide anything. It might not be as big a deal as you think.
I worked in healthcare and social science research. Our research director would notify her superior and the local institutional review board. In this case you would notify your work supervisors and any other source of funding for the research. It is one bad actor who may have made an oversight, certainly of not citing a contributor, but of maybe forgetting to alter what he clearly meant to say in his own words.
Thank you. We already sent an apology email to the editor and will notify management asap.
Yeah I don't think they were trying to be deceptive but it was certainly not appropriate. Just to add though, Isn’t it obvious to not cut and paste text? You can’t even do that with your own past work without citing. And finally just to vent, the paper was good work. But now I see no other alternative but to just scrap it. I wouldn’t even trust a resubmission.
Ask someone up the chain about it. They could probably help you the most.
[ "Good resources to learn combinatorics?" ]
[ "math" ]
[ "tiyynp" ]
[ 32 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
I have an undergrad degree in applied math/statistics however I never took a course dedicated to combinatorics (just learned what was required in prob/stat) but I feel like I want to learn some more serious stuff. Are there any good recommendations for combinatorics educational materials? Any textbooks you all recommend? Since I'm reading these for the first time I usually have a preference for textbooks with a more informal writing style.
A Walk Through Combinatorics by Bona; A Course in Combinatorics by van Lint
I also recommend the Bona book. Also, u/DaKing410 a few more recent books (listed in order of my (personal) opinion of whether they're good books or not): by N. Loehr (2nd edition, 2017). by P. Mladenovic (2017). all with solutions* by T. Andreescu and Z. Feng (2004). by K. Bogart (author died in 2005, posthumously finished and published in 2017). Though, there might be better intro textbooks out there now. (You might be able to tell that I haven't looked since 2017.) Also, note that all of these are introductory books - if you want to go more advanced later, you might want to look at books by Richard Stanley, P.J. Cameron, etc.
I was trying to remember — the Bona book is great. It’s the one with graph theory right?
Here is an intro combinatorics textbook with an informal writing style that I like: Applied Combinatorics by Keller and Trotter, https://www.appliedcombinatorics.org/book/app-comb.html Despite its name it is really still on the theoretical side. It is mostly what a "first course in combinatorics" would cover and goes a little bit further from there. It has a section on applications to probability you may like.
By that I mean the types of topics that would be covered in a course called 'Combinatorial Analysis' or something similar. There was one offered at my university and I've seen similar titles elsewhere.
[ "Serious mathematics books that are well motivated with clear writing" ]
[ "math" ]
[ "tj3ysx" ]
[ 167 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
Mathematicians have a bad reputation when it comes to teaching mathematics to people who have no idea of how the field works and are trying to enter it. That is a reason why a huge portion of youngsters hate/fear mathematics in general. Sometimes they even change their major at college or even drop out. Similar is the reputation of math textbooks as dry, non-motivating, etc. Most books assume that you are already a mathematician and straightaway start with a set of abstract definitions. They present no motivation for the topic at hand. I am sure there are books that refute this notion and are excellently written and well motivated. are one such example. As you may have guessed by now, I am not talking of popular books on mathematics. I am talking about serious books which make you a better mathematician in that domain after you have gone through them. Do you know of similar such books?
Primes of the Form x^2 + ny^2 by David Cox motivated me all throughout grad school.
Some examples of great and clear writing: The last one mostly for the exercises, since the chapters themselves are pretty terse, which maybe means it isn't quite what you're looking for.
Lots. Klaus Janich, Katok and Hasselblatt, Peres and Mortens, Baldi, Rogers and Williams, Oliveira and Viana, Jost, Thurston, Le Dret, Milnor, Arnold, Gromov, Anything by Tao
Lee's trilogy on manifolds
Spivak's A Comprehensive Intro to Differential Geometry is extremely well motivated and very entertaining. For example, let's say you are about to take a second course in algebraic topology which is centred on characteristic classes (say from Milnor) then chapter 13 in volume 5 called, 'The Generalized Gauss-Bonnet Theorem and What It Means for Mankind' makes for outstanding preparation. Spivak presents what he calls the 'proof by magic' and then spends the rest of the chapter under the hood, connecting (sorry) characteristic classes and curvature. Lots of geometry instead of alg top. Also a shout out to Marcel Berger's monumental A Panoramic View of Riemannian Geometry. Not really a textbook, could be thought of as a 'Princeton Companion to Riemannian Geometry', but more advanced than the original PCM and by one of the outstanding geometers of the latter half of the 20th c. The amount of insight this beautiful book has is astonishing.
[ "Calculus 2" ]
[ "math" ]
[ "m8pm1n" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.67 ]
null
If you take differential equations, it will be useful there, and in partial differential eqns .
In calculus 2 you have probably studied about series, sequence etc? If it is the case, then you will study about functions in 3 dimension in multivariate calculus. You will need the basic concepts from cal 2 and 1 to extend the concepts into three dimensions. If you are not comfortable with cal 2, I highly encourage you to give it some time. Good luck
Any time you need to add a bunch of things up there is a pretty good chance you can and should use integration. I think it’s more used in concrete sciences like physics, but I know rigorous statistics uses integration often.
Like what the other comments are saying, it is ambiguous what you mean by calculus 2. I am assuming calculus I is essentially limits & differential calculus of scalar (from one variable to another) real functions (maybe also some integral content) & calculus II is Sequences & Series of both real numbers & scalar real functions & integral calculus of scalar real functions. Both these classes will be important for Multivariable Calculus since you extend these notions considering functions of vector variables (multiple variables) to scalars like w/ partial derivatives (in turn bringing in functions from vector variables to vector variables) & directional derivatives & integrals over vector-valued domains & perhaps you will discuss some multivariate Taylor series (I'll honestly admit that probably the Sequences & Series content is not that emphasized in multivariate classes, but you definitely need the limits, derivatives & integrals down for scalar functions since you will generalize those as above). This material is also relevant for subjects like DEs, Probability, statistics & Fourier series as others mentioned & other quantitative subjects (perhaps covering other fields like physics, biology, finance, computer science, etc).
You are misunderstanding what people are telling you about multivariable calculus. Some students think Calculus 2 is much harder than Calculus 3 (multivariable) and others think Calculus 3 is much harder than Calculus 2. Each of those courses has its own tough topics: Calculus 2 has infinite series, which confuse students who never figure out what convergence of an infinite series really means. Calculus 3 has multi-dimensional integrals and the whole panoply of 3-d geometry that often confuses students who've never done math beyond 1 or 2 dimensions before. Calculus 3 courses do not include anything about infinite series in higher dimensions (these certainly exist, but the Calculus 3 course and textbooks don't treat them). What people really mean when they say Calculus 3 doesn't have "a lot" of Calculus 2 concepts is that it doesn't have anything about infinite series. That's it. Calculus 3 is full of derivative and integral concepts in several variables (partial derivatives, tangent planes, double and triple integrals, and generalizations of the Fundamental Theorem of Calculus to multivariable integrals). So it is completely wrong to say Calculus 3 doesn't use Calculus 2. It only doesn't use the series part of Calculus 2. The rest of Calculus 2, particularly anything involving integration from Calculus 2, absolutely is used in Calculus 3. All parts of the math in calculus (not necessarily applications from those courses) appear in later math. It all depends on the direction in which you will go.
[ "Web developer very basic math knowledge here. Want to start discrete mathematics. Is basic algebra enough?" ]
[ "math" ]
[ "m8oyhe" ]
[ 8 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.9 ]
null
Discrete mathematics courses tend to include fairly sophisticated topics. You absolutely will need a strong foundation in arithmetic and basic algebra. More than that, you need to have developed some level of what we call 'mathematical maturity.' As you study mathematics, you begin to develop a better sense of how to approach mathematics. This, more than any one topic, is the prerequisite for a usual discrete mathematics course.
You absolutely will need a strong foundation in arithmetic and basic algebra. More than that, you need to have developed some level of what we call 'mathematical maturity.' This, more than any one topic is the prerequisite for a usual discrete mathematics course. Yep. Unlike, say (non-Real analysis) Calculus and whatever goes for Algebra 2, it is hard to point at a specific course (apart from elementary arithmetic and algebra) as something that one should have mastered before Discrete Math. It is relatively self-contained, any decent book on DM introduces just as much Logic as is needed, just as much Graph Theory (or Counting or wtv) as is needed. It is also deceptive in that productively tackling it and learning ways of comprehending it is less about specific material mastery and more about grokking how to master this sort of material, which just comes with chewing on as much math as possible.
in many programs discrete mathematics is the course to develop mathematical maturity though
I have never taken a Khan Academy class. I did take a Discrete Math class in college with a limited math background. You do not need more than algebra but, depending on the class, instructor, and department, you might spend a fair amount of time picking up new terms and ideas. If you have to pay for the Khan class, I recommend looking up discrete math videos on YouTube. Some universities have put them online. Watch several and you can decide for yourself.
I'd suggest you just try to read the book and see if there is anything that confuses you or terminology that you don't understand (that isn't covered in the book).
[ "Advice for slow math guys?" ]
[ "math" ]
[ "m8p1tt" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 1 ]
null
I try to practice whenever I can. The problem is my speed makes it take like an hour to memorize the steps to solve a single question, and then my other college courses come in which I have to attend to as well. People can get caught up rechecking over and over again to look for mistakes, if this is you establish a regime of how long you will spend checking an answer/working (say 15s max if no standout errors) I'll try and incorporate this. One thing i'm thinking of doing is looking at the solution of a question beforehand, and then just repeatedly solving that same question under a timer to see how fast I reach that solution. The problem is when I meet a new question I get slow again and also burnout.
Okay, there's tons of problems on the internet so this should be easy to start.
How long did you study for your exams? And what classes are these? For classes like Calc, I find that if I can run through dozens of examples in the days leading up to the test, I'll know what to do for a problem by reflex. Meanwhile classes like Discrete wrecked me because a lot of problems aren't as general as they are in calc. I guess "study more" is a pretty obvious answer lol, so sorry if this isn't really helpful. Edit: If you'd like I can go into detail about how I study. It's been successful for me so far, 3.8 GPA.
[ "Elementary Number Theory or Independent Study in Algebraic Number Theory?" ]
[ "math" ]
[ "m8lg0i" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
[deleted]
I don't think it is a good idea to study algebraic number theory without a strong grasp (interiorized knowledge) of basic number theory. Probably you can acquire that knowledge by yourself before next semester, if you are diligent enough to do also a lot of exercises besides reading and understanding the concepts.
I'm sorry but my studies were 30+ years ago. You can try to get a syllabus for the basic number theory course to see what they touch.
Taking an elementary number theory class would be good for more reasons besides content. Having a structured set up like this could allow you to better succeed in the course than an independent study might.
If that's going to be the structure of your independent study then I would say doing that would be better than the class. 1 on 1 time with a professor will allow you to fill out any material from the elementary number theory course you would have otherwise missed, plus you'd be learning the material for algebraic number theory. Now that you've clarified that, it seems that the independent study is the best approach.
[ "What are some cool proved conjectures that took a long time to be proven?" ]
[ "math" ]
[ "m8klpr" ]
[ 11 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
null
To me, it's the date of publication of the first valid proof. When it is determined to be valid is a separate matter from when it was proven. As an imperfect comparison, suppose an explorer from Durhan reaches San Marcos (previously unknown to Durhan) in year n, then returned home in year n+2 (because ships are slow back in n). I would say San Marcos was "discovered" in n, even though few people knew about it until n+2.
A less well-known but still famous example is Euler's lucky number conjecture. Euler discovered that for certain values of k, the numbers n^2 - n + k are prime for 0 < n < k (these values of k are now called Euler's lucky numbers). The question of whether there are more of them was finally settled in the middle of the 20th century.
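To make the statement concrete, here is a hedged brute-force sketch that recovers the known list of Euler's lucky numbers (2, 3, 5, 11, 17, 41); the hard, mid-20th-century part is proving the list is complete, which this check of course cannot do:

```python
def is_prime(m):
    """Trial-division primality check, fine for small m."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def is_lucky(k):
    """k is a 'lucky number of Euler' if n^2 - n + k is prime for 0 < n < k."""
    return all(is_prime(n * n - n + k) for n in range(1, k))

# Searching a small range recovers the full (provably complete) list:
lucky = [k for k in range(2, 50) if is_lucky(k)]  # [2, 3, 5, 11, 17, 41]
```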
Depending on what you consider a long time:
The abc conjecture, if you consider that proved This brought to mind a question. Assume that Mochizuki had proven abc, but that it had still taken the same amount of time to get things moving. So in our hypothetical universe 2012: Mochizuki publishes the proof 2018: Scholze and Stix fly out, but here they declare it all checks out 2020: In an atmosphere of general acceptance of the proof, it is published to plaudits would we consider the abc conjecture to have been proven in this hypothetical universe?
Come on, you can't drop that and not say what the answer was. Are there more?
[ "Jesse Johnson's lecture notes on Heegaard splittings" ]
[ "math" ]
[ "m8ufjx" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
Hello, I hope my post is appropriate (if not, I'll remove it). I have been stuck on an MSE answer, because the linked notes have been removed! Would anyone have a copy of said notes that they'd kindly share with me? Thanks a lot!
After some googling it seems like you can find them here: https://dokumen.tips/documents/jesse-johnson-notes-on-heegaard-splittings.html
Here you go: . I think the link should work for the next few hours.
Sorry. That was an old-person move. I sent it to you by private message.
It's been deleted already!
eyy, could I get one, too?
[ "Reading PhD Papers" ]
[ "math" ]
[ "m8w65o" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.79 ]
After reading a segment from mathematician Richard Goldblatt's thoughts on new published works, it came to mind how difficult published papers can be to understand even for accomplished mathematicians. For those in graduate school, how difficult is it to read fellow graduates' papers? Is it like deciphering a new language? And how do you go about ambiguity? Including a personal journey (describing one's confusion until the moment of clarity) would also be appreciated. The segment: "Goldblatt strongly suggests category theory as a whole was shaped by generalizing set theory. In explaining 'the style I [Goldblatt] have adopted' he deplores modern mathematical writing which gives abstract definitions before it 'reveals the original motivation,' so that 'the student is not actually shown the genesis of concepts-how and why they evolved-and is thereby taught nothing about the mechanisms of creative thinking.' 'All of this,' he says, 'seems to me particularly dangerous in the case of category theory, a discipline that has more than once been referred to as "abstract nonsense" " Author(s): Colin McLarty Title: The Uses and Abuses of the History of Topos Theory Source: The British Journal for the Philosophy of Science , Sep., 1990, Vol. 41, No. 3 (Sep., 1990), pp. 351-375 Page: 352
It's an exercise so different from reading a book that today I cannot read a book normally... I'm so used to reading the same paragraph for an hour, then moving forward to come back to it later, that it has made it very difficult for me to read an actual text. A newspaper article is fine, but a whole book has become a pain to me. Is anyone else in the same situation? Are you personally in this situation?
I'm in a similar situation but for understanding a physics grad paper. Personally how I read research papers differ from a casual book, because I'm okay with letting things slip by in a casual context. For research papers I feel like I'm completing the task of understanding a new nuance and the language used to encapsulate that nuance. It's a very sensitive process that even the most minor ambiguities in notation leads to frustration. I often think of this quote from Paul Ehrenfest in a letter to Bohr : “I have completely lost contact with theoretical physics. I cannot read anything anymore and feel myself incompetent to have even the most modest grasp about what makes sense in the flood of articles and books. Perhaps I cannot at all be helped anymore.” ---- Gino Claudio Segrè; "Faust in Copenhagen: A Struggle for the soul of Physics" Page 176
You may like to read this thread .
I honestly just trust my gut instincts and use a bit of common sense to figure out how to approach new material. One can fool themselves to keep up a veneer of perfect understanding but that's an ill-defined goal in the first place. So I just try to optimize my time management, figure out the crucial parts that I find worthwhile to dig deep into and go from there. That usually works pretty well and I rarely have too much trouble. Then again, maybe the subjects I'm learning are relatively easy.
Reading papers is a skill that you have to develop, and it isn't always easy. Because math is so dense, it requires a very slow and careful approach, especially if the argument is complicated or the author isn't very clear. I usually wind up taking multiple passes through, once to read just the statements of the lemmas and theorems to get a sense of the overall scope of the paper, another reading the proofs, and then a few more to actually try to understand all the details. An important part of this is finding all the details that you don't understand, marking them, and then coming back to them once you have a better sense of the overall paper to try to answer your own questions. I have a pretty mixed track record on actually being able to answer these questions, but it is important to figure out exactly what you don't understand. How do you go about ambiguity Ask my advisor, who is much better at this stuff than me.
[ "Expressing Exponents as Additions" ]
[ "math" ]
[ "m8t8uc" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
Is there a known formula for expressing exponents as a series of addition operations? For example, the multiplication of x * y could be expressed as the summation of x over 1 to y: x * y = ∑{i=1 to y} (x) (Excuse the formatting) Is there an equivalent, and recursive, summation for exponents? x ^ y or even x ^ y ^ z ?
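A sketch of the kind of recursion the post asks about, valid for non-negative integers (the function names are mine, purely illustrative):

```python
def mul(x, y):
    """x * y expressed as y repeated additions of x (y a non-negative int)."""
    total = 0
    for _ in range(y):
        total += x
    return total

def power(x, y):
    """x ** y expressed as y repeated multiplications,
    each of which is itself repeated addition (x, y non-negative ints)."""
    result = 1
    for _ in range(y):
        result = mul(result, x)  # result * x, by repeated addition
    return result
```

The same pattern can be iterated once more to express x ^ y ^ z, which is essentially the hyperoperation hierarchy (addition, multiplication, exponentiation, tetration, ...).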
There is a multivariable Taylor's theorem in terms of partial derivatives that might be helpful to you.
Are you familiar with calculus and Taylor series?
It uses a function of exponential nature with one variable so,as far as I have understood what you are asking, no non-variable values(constants) won’t affect the method.
Yes but not deep. I understand that f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... = f(0) + f'(0)/1! x + f''(0)/2! x^2 + f'''(0)/3! x^3 + ...
The Taylor series of e^x at x=0, i.e. its Maclaurin series, is what you are searching for. https://www.mathsisfun.com/algebra/taylor-series.html
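A quick illustration of that suggestion, summing the Maclaurin series of e^x term by term (the 20-term cutoff is an arbitrary choice; more terms are needed for large |x|):

```python
import math

def exp_maclaurin(x, terms=20):
    """Partial sum of the Maclaurin series e^x = sum over n of x^n / n!."""
    total = 0.0
    for n in range(terms):
        total += x ** n / math.factorial(n)
    return total
```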
[ "\"While most people imagine mathematicians doing arithmetic all day, with really big numbers, the truth is that the discipline requires a remarkable amount of creativity and visual thinking. It is equal parts art and science.\"" ]
[ "math" ]
[ "m8hjw5" ]
[ 1241 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
Upon hearing one of his students had dropped out to pursue poetry, David Hilbert remarked "Good, he did not have enough imagination to become a mathematician."
For those seeing this, do note that the article is a dialogue, so it's two people responding to each other. Also, did anyone else find Ghrist's writing a bit purple prose-y?
He really lost me in his first paragraph with this: are you sure Mathematics really is aesthetic? What if its apparent beauty is a ruse or self-delusion? Are we really going to ask "What if x isn't beautiful, but only appears to be?" What kind of question is that?
The fact that most people think math is arithmetic with very large number and does not require creativity is a failure of our educational system. Some thoughts on that here: https://ndworkblog.wordpress.com/2021/03/01/thoughts-on-math-education/
Right? No need to be a dick, Davey lol
[ "Formalising the notion of 'computation'" ]
[ "math" ]
[ "m8l9mh" ]
[ 9 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
Hello, I am hoping someone can point me to a resource (if it exists) that develops the formalism characterising what a 'computation' is in a rigorous way. For some context: My interest stemmed from watching Roger Penrose discuss some of his ideas on consciousness where he references notions that consciousness may be 'computational' or conversely 'not computational'. He also mentions something called 'computational theories' in the context of physics. So I'd like to understand it in this physical context, but I'm equally as interested in the mathematical formalism behind it. I have asked on both physics and math stack exchange, but unfortunately haven't got any responses (But maybe this is a better place to ask anyway, since this is likely at the intersection of physics/maths/CS/philosophy). I have read some of the relevant wiki pages, but I've been unable to really get it. For example, the described mapping account doesn't seem like a pure mathematical formalism, as it makes reference to physical systems. And some others seem circular If anyone has any enlightenment for a beginner I'd be greatly appreciative! And perhaps I'm totally off base and there is in fact no such formalism, or maybe it is a purely physical concept, or a philosophical one, but that would be great to learn about too.
My sense is that Penrose is a bit of a crank around consciousness. There's a phenomenon, the "Turing Tarpit", where a lot of alternative formalizations of computation turn out to have the same power (in the sense of what is computable). The Minsky machine https://en.wikipedia.org/wiki/Counter_machine is equivalent to the Turing machine https://en.wikipedia.org/wiki/Turing_machine which is equivalent to a Kolmogorov pointer machine https://en.wikipedia.org/wiki/Pointer_machine which is equivalent to a Gurevich "Evolving Algebra" abstract state machine https://en.wikipedia.org/wiki/Abstract_state_machine which is equivalent to Church's lambda calculus https://en.wikipedia.org/wiki/Lambda_calculus which is equivalent to Post's tag systems https://en.wikipedia.org/wiki/Tag_system which is equivalent to Rule 110 in elementary cellular automata (Wolfram's conjecture eventually proven by Matthew Cook). The tendency for people starting at reasonable points (Turing's paper "On Computable Numbers, with an Application to the Entscheidungsproblem", where he defines Turing machines, is quite reasonable and readable) to end up at the same equivalence class makes people think that this equivalence class is a fairly natural and reasonable concept. The consensus among computer scientists is that human thought is almost certainly in this same category, since it is so difficult to design something that seems natural and leads to a different equivalence class. (Of course you can talk about "a Turing machine with an attached magical oracle that can solve the Halting problem" or "a Turing machine that can run an infinite number of steps in a finite amount of time" - these definitions lead to different equivalence classes, but they're also not very natural.) Penrose is brilliant, certainly, but he's an outlier in believing that this question about human thought is particularly up in the air.
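Since Rule 110 comes up as the most minimal-looking member of that equivalence class, here is a small sketch of one update step (the fixed-zero boundary is a simplifying assumption of mine; Cook's universality proof actually works on an infinite periodic background):

```python
def rule110_step(cells):
    """One step of elementary cellular automaton Rule 110 on a list of
    0/1 cells, treating cells outside the list as 0."""
    # Rule number 110 = binary 01101110, read over neighborhoods 111..000:
    table = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    padded = [0] + list(cells) + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]
```

Iterating this simple local rule is, remarkably, enough to simulate any Turing machine (with suitable input encodings), which is exactly why it sits in the same equivalence class as everything else in the list above.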
Alternatively: Turing machines . Alonzo Church (Lambda Calculus) was Alan Turing's PhD thesis advisor.
Computability ? https://en.wikipedia.org/wiki/Computability
I think this is my issue, I don't know the field to be able to narrow it down effectively. I think I'm after a rigorous definition of computation. I've seen it defined as a calculation, and obviously I intuitively appreciate something like going from 2*2 to 4 is a calculation, but I'm quite interested to know if there is precise formalisation in a typical axiomatic system. With that said, is a computation simply a function? Edit: Just realised something like lambda calculus is what I'm after!
Thanks, these are great!
[ "Advice for learning multivariable calculus/Calculus on Manifolds" ]
[ "math" ]
[ "m8nxle" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
Hey everyone, I'm going to be taking a class next quarter that I believe has Spivak's Calculus on Manifolds as the textbook. I have no multivariable calculus/linear algebra experience and want to make sure I do well in this course. What do you recommend I do? I started looking at the first chapter and I'm having trouble understanding it. Are there any simpler textbooks/online resources that I can look at to supplement this textbook?
If you've never seen linear algebra or multivariable calculus then it is crazy to take a course in calculus on manifolds since you're not going to understand what the point of anything is (as in understand some actual examples beyond things in 1 dimension). Do you really meet the official prerequisites of the course, or is the instructor letting you in by waiving the official prerequisites? Linear algebra is a prerequisite for Spivak's Calculus on Manifolds: he says so in the first paragraph of the preface. For example, the notion of (total) derivative for a mapping in Chapter 2 is defined as a certain linear transformation at each point, so without linear algebra the very idea of derivative in the book will not make sense. Linear transformations (from linear algebra) are something you should have already worked with a lot so the pile of new definitions describing derivatives in terms of linear transformations doesn't completely overwhelm you. While Spivak does not strictly require multivariable calculus, I think you should have already taken that so the concept of partial derivative is familiar. You should also have studied real analysis so you know what compactness and open covers mean and these things are important in analysis. You said you are having trouble with the first chapter of Spivak's book. If the first chapter is not a review of concepts already known to you, then I don't think you're ready for a course based on this book.
Agree with this. Why are you doing this? Is this math 55?
I agree with another commenter who says Munkres' Analysis on Manifolds. However, you should also read the first few chapters of a linear algebra book like Axler's. I took a class that used this as the textbook; it was extremely hard to read, and the exercises are even worse.
The OP can't be preparing to take Math 55: note the phrase "next quarter" in the post. Where Math 55 is taught, academic terms are semesters.
I've seen Spivak's Manifolds book paired with something more substantial, like Munkres' Analysis on Manifolds, in this course: https://ocw.mit.edu/courses/mathematics/18-101-analysis-ii-fall-2005/assignments/ Good luck! I just got the Manifolds book a few days ago and was surprised at how small it was compared to the Calculus book by the same author.
[ "Can you even consider Pi the \"circle\" constant anymore?" ]
[ "math" ]
[ "m8kk02" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.57 ]
Pi is associated with circles - it is the ratio between the circumference of a circle and its diameter. But the more you learn about it, the more you see it really isn't just that. It shows up absolutely everywhere, in all areas of math and physics, including ones that have literally nothing to do with circles. The number of times it shows up is genuinely ridiculous. Now, whenever I watch a video that discusses Pi showing up in some weird and unexpected place, it always somehow connects it to circles, or shows some visualization to make you see how it can be represented as a circle, etc. A good example is by 3Blue1Brown. But is it even fair for it to be called a circle constant anymore? Like, sure, its discovery/invention was in circles and all, but that doesn't justify it being the "circle constant". The fact you can visualize each weird connection with Pi as some sort of circle-looking thing doesn't mean that it is a circle constant. Maybe you can connect each weird place Pi shows up in to the infinite series 4*(1-1/3+1/5-1/7+1/9-1/11...), which also equals Pi. So in summary, I'm asking if Pi has an intrinsic connection to circles only, or do we say it is a constant of circles just because that's how it was discovered/is its most known use? Thanks.
Perhaps the circle is simply one manifestation of pi at work in nature, the one we most easily recognise, and the only one most people can get their heads around.
Im with Kevin: "Why waste time say lot word when few word do trick". For me it is just and simple Pi. Mmm Pi 😋.
More reasonably, circles and periodic motion show up a whole lot "in nature" (including in pure mathematics in places that don't seem circle-y on their surface), and the way we describe those using our conventional mathematical language often involves π and arclength. /u/AinsleyBoy wrote: literally nothing to do with circles When π appears anywhere, there is a circle involved in some way. If you can't see the circle yet, that means you don't yet fully understand the phenomenon in question, and perhaps need to practice your geometrical visualization skills/imagination. infinite series 4*(1-1/3+1/5-1/7+1/9-1/11...), This series is definitionally about circles: it is the inverse tangent projection (or if you like, inverse stereographic projection) from a slope onto an angle measure. If you think π showing up in angle measures counts as "weird" and not circular, I don't know what to tell you. At that point you might as well say that x² + y² = 1 has nothing to do with circles.
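As a quick numerical sanity check on the series under discussion (a throwaway sketch, nothing more), the partial sums of 4·(1 − 1/3 + 1/5 − …) do crawl toward π, just very slowly:

```python
import math

# Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - 1/7 + ...),
# i.e. 4*arctan(1): four times the angle whose slope is 1.
def leibniz(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Alternating-series bound: the error is below the first omitted term, 4/(2n+1).
approx = leibniz(100_000)
```

With 100,000 terms the result agrees with math.pi only to about four decimal places, which is why this series is a terrible way to actually compute π.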
Using the conventional (i.e. non-Math-English) meaning of names in Math is not a very useful way to understand Math. See all the people who get tripped up on "imaginary" numbers. Having said this, this is the first I am hearing of someone call Pi "the circle constant". I mean, sure, there are some ways in which you encounter it first, in geometry and such. However, that name is not a usage that I have seen outside of that specific context.
Plus, when pi does show up elsewhere you usually need two of them. Put me on team Tau!
[ "How to find a Hamiltonian path of a graph without brute forcing it?" ]
[ "math" ]
[ "m8g8zc" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
Hi! Is there any systemic approach as to how to get the hamiltonian path of a graph without brute forcing it? Like a step by step procedure? Thank you so much!!!
There's a million bucks in it for you if you can find one.
The wikipedia page lists various algorithms whose cost is smaller than the brute force approach O(n!). Unfortunately, all these algorithms need exponential time in the worst case, as should be expected, since Hamiltonian path, as a decision problem, is NP-complete.
There are various algorithms which are better than brute force but still not polynomial.
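One classic better-than-brute-force approach is a Bellman–Held–Karp-style bitmask dynamic program: O(2^n · n^2) instead of O(n!). A minimal sketch (the function name and adjacency format are my own choices, not a standard library API):

```python
def hamiltonian_path_exists(adj):
    """Bitmask DP for Hamiltonian path, O(2^n * n^2).

    adj[v] is the set of neighbours of vertex v (vertices 0..n-1).
    dp[mask] is a bitmask of the vertices v such that some simple path
    visits exactly the vertex set `mask` and ends at v.
    """
    n = len(adj)
    dp = [0] * (1 << n)
    for v in range(n):
        dp[1 << v] = 1 << v              # single-vertex paths
    for mask in range(1 << n):
        for v in range(n):
            if (dp[mask] >> v) & 1:      # some path over `mask` ends at v...
                for w in adj[v]:         # ...so try extending it by an unvisited w
                    if not (mask >> w) & 1:
                        dp[mask | (1 << w)] |= 1 << w
    return dp[(1 << n) - 1] != 0
```

Still exponential, as the NP-completeness result predicts, but 2^n · n^2 is vastly smaller than n! already for modest n.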
Every single sentence in your comment is false. Have a look at the definition of Hamiltonian path.
I also chuckled upon reading this. Maybe someone will post their life’s work here in the comments.
[ "Sharing a Introductory Ring and Field Theory Cheat Sheet that I made" ]
[ "math" ]
[ "m8dxac" ]
[ 67 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
Link to the two images: Updated: If you wish to download it as a .pdf, make stylistic changes or just add some content, here's the link to the TeX on Overleaf: An important point to note: unless stated otherwise, assume all rings in the cheat sheet to be commutative and with 1 ≠ 0. Even though a vast majority of the statements made are applicable to rings in general, for the sake of brevity I did not want to state it specifically each time. I have mainly referred to Dummit and Foote and occasionally Wolfram MathWorld for almost all of the definitions. I have only included material that is in my syllabus; because of this you may notice quite a few sections from the book missing. If you notice any errors (I am sure there will be a few) let me know in the comments and I will try to update the TeX and image. Hope this is helpful to someone. The document was entirely based off a template I found on Overleaf. I previously shared an Intro Group Theory Cheat Sheet I had made. You can check that out here: Edit: Fixed typos (polynomial rings header, CRT wrong variable), mentioned ring without 1 for kernel being a subring.
I was going to say the same, but OP is not assuming rings contain 1 in general, and so does not assume ring homomorphisms preserve 1. Not assumptions I would go with, but this is consistent with their definition of subring.
This is excellent!
The kernel is in general not a subring (of R), because 1 should be mapped to 1 (which is usually part of the definition of a homomorphism). But the kernel is an ideal of R.
There's a typo on the Chinese Remainder Theorem where I,J became A,B. All very useful stuff. I might suggest putting the "optional" ring properties in a separate section than the axioms though.
Ah yes you're right, I missed that. The statement is true for rings not containing 1. I just didn't specify at each step whether or not we are considering rings with 1. Will specify that when I update it.
[ "Classes" ]
[ "math" ]
[ "cpepkg" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.17 ]
null
If you are asking for advice on choosing classes or career prospects, please post in the stickied Career & Education Questions thread.
Real and complex are definitely good for foundation. Since you're looking to go into theoretical cosmology, I'd additionally recommend differential geometry (and general relativity on the physics side) and PDE for general applicability to physics. Also topology for cosmology and dark matter.
I have not studied it myself, but my understanding is that differential geometry studies curved spaces in general, in any number of dimensions. The 4D curved spacetime you'd see in General Relativity would be a special case of that. Someone please correct me if I'm wrong here.
Differential geometry is related to space curves I'd experience in General Relativity, right?
Unfortunately, your submission has been removed for the following reason(s): Career and Education Questions If you have any questions, please feel free to message the mods . Thank you!
[ "Are there any grad classes that the public can listen into?" ]
[ "math" ]
[ "m8a11v" ]
[ 13 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
null
He hates: all things practical or applied, and graph theory Spoken like a true mathematician
It's not being short sighted, it's just appreciating the mathematics for the mathematics. I personally find a lot of elegance is lost when things become application oriented. But of course to each their own.
Check out the ICTP's recorded lectures https://www.youtube.com/channel/UCBlqfZZYQWKyr6qLAB7LINw/playlists?view=50&sort=dd&shelf_id=5
This is a very nice thing you’re doing for your dad. If he has a list of textbooks that the courses he’s taken have used, that would make it very easy to give recommendations. That, or their syllabi (just a bullet point list of about topics per course would be enough).
http://math.ucsd.edu/~kkedlaya/math204a/
[ "Transfoooooorm to the left! Transfoooooorm to the right!" ]
[ "math" ]
[ "cpgxk6" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.3 ]
null
Shit post! Everybody clap their hands!!
One Hopf this time
what
Are you okay there buddy?
It’s a joke about the Cha Cha Slide. I should’ve said translate instead
[ "The proof for \"the sum of two primes >2 is not a prime\"" ]
[ "math" ]
[ "cph17z" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.33 ]
[deleted]
Well if P and S are primes bigger than 2, they are odd numbers. If you sum two odd numbers, you get an even number. Thus the result is even and not 2, so it cannot be a prime.
p=2x+1 and s=2y+1 for some x,y. p+s=2x+2y+2=2(x+y+1)
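A brute-force check of the same formula, in case the algebra alone doesn't convince (the trial-division primality test is my own throwaway helper):

```python
def is_prime(n):
    """Naive trial-division primality test -- fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# p = 2x+1, s = 2y+1  =>  p + s = 2(x + y + 1): always even and greater
# than 2, hence divisible by 2 and never prime.
odd_primes = [p for p in range(3, 200) if is_prime(p)]
for p in odd_primes:
    for s in odd_primes:
        assert (p + s) % 2 == 0 and not is_prime(p + s)
```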
What is that even supposed to mean?
No need. That's all there is to it
Yeah I know that but is there a formula?
[ "Study shows we like our math like we like our art: beautiful" ]
[ "math" ]
[ "cpdhjp" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.7 ]
null
the study shows that average Americans can assess mathematical arguments for beauty Given how bad the average American is at mathematics, I find this claim incredibly difficult to believe! ...the four proofs presented are simple, though.
Define "bad at mathematics." Of course the average person is "bad" at mathematics in the sense that the majority of people are not trained to be professional mathematicians. And of course they can't appreciate highly specialized results--just like you probably can't appreciate highly specialized results in other fields. I feel like your argument is similar to: you can't enjoy art or classical music unless you are trained in art or classical music. It's just not true.
People are bad at math only in the sense that they are poorly educated. It doesn't mean they can't think mathematically. I didn't truly find the joy in math until I started self-studying my own interests - this is after getting my undergrad degree in aerospace engineering and taking several additional math classes. Hell, I remember getting laughed at in this introductory class because I asked a question about defining curvature intrinsically to show some point was in the attracting region of a stable fixed point. The answer to my question was essentially the Lie derivative.
Bad in the sense that they can’t handle simple algebra..
Can you do a basic painting? If not, I suggest not claiming you like art! (At least according to the argument at hand.)
[ "If 0.9999999... = 1, does 1.000...01 also = 1?" ]
[ "math" ]
[ "cpfewe" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.4 ]
[deleted]
1.000...01 doesn't mean anything.
0.999... is not an infinitesimal small amount less than 1. It is 1.
Note that, in standard analysis, what you are saying is meaningless. 0.9999... is just a different decimal representation of 1--it is not a representation of a number that is an infinitesimally small amount less than 1. However, if you do want to introduce infinitesimals, you should look into the hyperreals, where you do have notions of infinitesimals. It is important to note, however, that even for the hyperreals, 0.999... is still another decimal representation for 1.
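For anyone who wants the one-line standard-analysis justification, the repeating decimal is by definition a geometric series (a sketch of the usual argument):

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1 .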
Well, 1.00000...001 doesn't exist, assuming you mean that there's an infinite number of 0's and then a 1 after that. Since infinite 0's wouldn't have an end, there is no "after" an infinite number of 0's, so 1.0000...001 can't exist. But if you just meant some arbitrarily large but finite number of 0's, then no, because the difference between the two numbers would be 0.0000...001.
No that's incorrect. 0.99999... is not smaller than 1 because if it were smaller than 1 it wouldn't be equal to 1. 1 and 0.99999.... are two ways of writing the exact same value
[ "Apart from cryptography, why do we care so much about understanding the structure of prime numbers? What exactly about prime numbers is hiding our understanding of the world?" ]
[ "math" ]
[ "m88n7s" ]
[ 444 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
Search anything about primes on a search engine and all you will see is the same cliché use for large primes for encryption. In layman’s terms, why do we care about understanding the structure of prime numbers? Apart from the love of mathematics of course.
Saying you care about prime numbers because of cryptography is like saying you watch porn for the plot. Some of us are just horny for number theory, alright?
The romantic answers given so far are not wrong, but can I simply add: .
I normally find them really unappealing on their own, but the structure itself is just so nuts. That such a simple idea connects to basically the entirety of math is such a beautiful idea. It's one of the first areas of math humans tried to understand, and it's still giving us new ideas and insights. And of course, every mystery we solve in one area may help us in unrelated areas.
I think this is probably the most serious answer for people that seriously do theoretical research on primes. Feynman did point out what mathematics is, ultimately.
On top of the other answers, the standard answer about an "obviously useless" problem like Collatz also applies here: answering questions about prime numbers will probably involve discovering/developing new tools we didn't have before, which could have hidden applications elsewhere.
[ "Quantum Search for the Everyday Linear Algebraist" ]
[ "math" ]
[ "cp2dt2" ]
[ 286 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
null
I'm the author of this blog, and I've written this post as an outside glimpse into to the field of quantum algorithm theory, while assuming a solid working knowledge of linear algebra to speed things up. Hopefully it fulfils its purpose. Any form of feedback is, of course, appreciated.
This is a standard notation particularly in quantum physics
I'm probably what you would consider the average linear algebraist, and I really enjoyed the article. I found the bra ket notation strange as my understanding is that it's a complicated way to say N×1 complex vector and the hermitian of that vector. Is this common notation for physicists?
It's completely standard in physics, starting in approximately the second semester of undergrad quantum mechanics.
I’m in QI, and yes, it’s completely standard to use bras and kets in our field. I find it convenient because it implicitly implies that any vector written using that notation is a unit vector, so you don’t have to keep explicitly stating that (since most vectors we use in QI are unit vectors).
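For the everyday linear algebraist, the correspondence really is that simple: |ψ⟩ is an N×1 complex column vector, ⟨ψ| is its conjugate transpose, and ⟨φ|ψ⟩ is the ordinary complex inner product. A minimal plain-Python sketch (the function names are my own, not any library's):

```python
# |psi> as a list of complex amplitudes; <psi| as its entrywise conjugate.
def bra(ket):
    return [z.conjugate() for z in ket]

def braket(phi, psi):
    """<phi|psi>: the complex inner product pairing a bra with a ket."""
    return sum(b * k for b, k in zip(bra(phi), psi))

inv_sqrt2 = 2 ** -0.5
plus = [inv_sqrt2, inv_sqrt2]      # |+>
minus = [inv_sqrt2, -inv_sqrt2]    # |->
```

The unit-norm convention mentioned above corresponds to ⟨ψ|ψ⟩ = 1, and orthogonality of |+⟩ and |−⟩ to ⟨+|−⟩ = 0.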
[ "If proofs are equivalent to programs (in a precise sense!), then how can proving be taught to be as accessible as programming?" ]
[ "math" ]
[ "cpd1rz" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.68 ]
null
Working with this equivalence for any but the most simple propositions requires a pretty sophisticated type system. Any programming language where you can prove interesting statements will in general be only be accessible to people with either a fairly advanced background in mathematics or computer science (ideally both).
Have a look at proof assistants. Philip Wadler has been writing one for Agda with a goal similar to the one you are stating.
My first answer lacks a word. He is writing a book about software foundations (~ math/logic and proofs) with Agda: https://plfa.github.io/
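To make the proofs-as-programs correspondence concrete: under Curry–Howard, a proposition is a type and a proof is a program of that type. A tiny Lean 4 sketch (illustrative only):

```lean
-- Under Curry–Howard, proving A → B → A is writing the K combinator:
-- a program that takes a proof of A, ignores a proof of B, and returns the A.
theorem k {A B : Prop} : A → B → A :=
  fun a _ => a
```

The type checker accepting this term is the proof check, which is exactly the kind of early feedback another commenter notes beginners usually lack when writing proofs on paper.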
I think one part of what makes programming more accessible is the availability of feedback at an early stage. As a beginner programmer, if you have written a program, you can run it and test it, and if something doesn't work, the computer will tell you. If you are beginning to write proofs, you need to have another human (with a decent knowledge about proofs) look at it for feedback, something which is not always readily available. And because proofs written by humans are not as unambiguous and well-defined as computer code, it can be harder to get the precise feedback you want and need. These are obviously not always true. There are cases where running a program fails to unveil some design flaw, and the feedback you get from another human can be precisely what you need, but I think this availability of feedback is a main part of what separates programming from proofs.
Programming is no more accessible than theorem proving, imo.
[ "Grad Algebra Book with Best Problems" ]
[ "math" ]
[ "cphky1" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
In your opinions, which grad algebra books do you think have the best problem sets? I have Aluffi, Jacobson (basic algebra I & II), Lang, and Isaacs on my bookshelf. I’m looking for recommendations besides Dummit & Foote since I used that for my undergrad algebra classes. Edit: I’m 100% open to recommendations besides Aluffi, Jacobson, Lang, and Isaacs.
The exercises in Aluffi are as easy as Dummit & Foote's, so you won't gain much doing those. As such, I'd recommend using Lang or Jacobson. In terms of actually learning stuff, I think Aluffi and Jacobson probably have the best exposition.
Honestly, Aluffi doesn’t really make any serious use of category theory in the exercises (other than the homological algebra chapter of course), whereas the category theory is more integrated in Lang and Jacobson, which is why I recommended those.
Are you just doing a bunch of problem sets for fun, or is there something you're trying to learn?
(Sorry, didn't read that you'd already worked through Jacobson.) Also Field and Galois Theory by Morandi was nice to work through, the problems at the beginning of the section are typically definitional/computational, but then get interesting past that. Alternatively, I have this book called Problems in Group Theory by Dixon which was tons of fun to work through.
I haven’t worked through Jacobson yet; they’re just on my bookshelf since I found them for about $8 a piece at a used book store. But I’ve read part of Jacobson I and like his prose I’ll check out Dixon, thanks!
[ "Math Club website suggestions?" ]
[ "math" ]
[ "cpck4b" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.56 ]
Last year I started up a math club at my college. We've planned some fun events with the help of one enthusiastic professor haha, however we are still really small, and it has been difficult getting people to participate in our events. I had an idea last night that I could start up a website for our club, maybe post interesting problems people could work on, have a calendar for the events we do plan, maybe a blog where I can post about research being done in the department. I think this is a good idea, because it doesn't require people to show up to meetings, which can be difficult, so people can participate at their own pace. I also hope that the professors will get excited about this idea too. Does anyone have any experience with running a website for a math club? Any suggestions for fun things I could post there?
Post pictures of people eating food . In college everyone is poor and starving. People will come check you guys out if they think they are going to get some free grub. Edit: I know it’s not really the focus of a math club, but it will get stragglers to come in.
Fellow math club organizer here! If your college doesn't have any sort of official community management system, a website might work well. However, you might want to also set up a Twitter or Facebook or Instagram, just so that people can learn of your club's existence in the first place. I've gotta run to work, but I'm just staring up my math club and if you'd like to talk some more, shoot me a PM!
We have an Instagram! It's been a great way to get our name out there. I was definitely thinking about the website being more for the community, hopefully getting people more active 😊
I suggest you get more connected with your math department. See if they can host your website (they also might have tools for making the website), and if they can send department wide emails about events. This is how I found out about the math club at my undergrad. Also, include pizza. Always include pizza.
After a few friends and I started the Math Club at my college, we found it easier to post things on our instagram than a website. It also made it easier to grow the club. Meetings are hard because everyone is so busy but the in person social interaction is nice. We also found that people enjoyed when we posted math jokes/memes between more serious posts.
[ "Process over state: Math is about proofs, not theorems." ]
[ "math" ]
[ "cpbgj9" ]
[ 550 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
The theorems are your tool kit. You can't fix a car with just a tool kit; you need know-how.
And you can't fix it without it. It seems absurd to me to separate proofs and theorems. A given theorem is simply a step in the proof of another theorem.
It's kind of fun to view this blog post via the Curry-Howard correspondence , where it says that programming is about programs rather than types.
I'm pretty sure that if someone thinks mathematics is only about theorems then they aren't a mathematician.
That’s very true. Lol that’s a very mundane and realist view of your theorems, but it’s true nonetheless. I think the theorems are more compact and straightforward than many of the steps to derive the theorem. There must be some reason to bold “the theorem” rather than the other steps used to derive the theorem.
[ "What Are You Working On?" ]
[ "math" ]
[ "cpeina" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.76 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!
fibre bundles and riemannian geometry
Recursive functions and fractal geometry.
I’ve been organizing meetings and a small math competition at my high school for Mu Alpha Theta (our math honors society) since i’m the president! I’ve also been practicing for the SAT math level 2 subject test in hope of getting an 800 on August 24th
That's quite a clean proof.
[ "Visualizing higher dimensions in Linear Algebra?" ]
[ "math" ]
[ "cp0q44" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 0.64 ]
2d transformations are a piece of cake, and 3d stuff is manageable. But once it starts going "n-dimensional" on me, I start to lose the visualization. What does the determinant of a 4x4 matrix mean geometrically? How can I visualize the column space of a 5x5 matrix with rank 4? And how does the line of the kernel travel through that 5d space?
To deal with a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it. --Geoffrey Hinton
And the unit sphere in L^p spaces is a sphere that's, like, shiny or something.
At some point if you can only "see the math visually" you're going to have a hard time. I think that the best thing to do here is to work through linear algebra proofs in 3D and again in 4D and see how the results are the same each time. This should help you to gain some familiarity.
absolutely. the only way i really 'visualise' higher dimensions is by drawing a rectangle/circle and saying "ok this represents this set or this n-dimensional space" so i can relate some other illustrations to it. no one can literally draw an n-dimensional object.
The good thing about Linear algebra is that you often don't really have to directly visualize anything, because the spaces it studies are the simplest possible. You don't need to visualize 12d space to know that its 9- and 5-dimensional subspaces in a generic position intersect in a (2d) plane. The 4d determinant is the signed volume of a 4d parallelepiped; what exactly do you need to see it for? The column space of a 5x5 rank 4 matrix is a hyperplane in R^5, etc. Of course, some things definitely need getting used to (e.g. to understand rotations and 2-vectors in nd it's enough to do so in 4d, but not 3d), but the great (and unique) thing about LA is that you can relatively easily get used to it and understand it without actually seeing it.
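On "what exactly do you need to see it for": you can just compute with it. A naive cofactor-expansion sketch (plain Python, my own throwaway code) makes the signed-4D-volume reading concrete: scale one edge of the parallelepiped and the determinant scales with it, and a swap of axes flips the sign (orientation).

```python
def det(m):
    """Determinant by cofactor expansion along the first row.

    Geometrically: the signed n-dimensional volume of the parallelepiped
    spanned by the rows of m.  O(n!) time -- only for tiny illustrations.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum(
        (-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
        for j in range(n)
    )

identity4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
# Stretch the first edge of the unit 4-cube by 2: volume should double.
stretched = [[2 if i == j == 0 else identity4[i][j] for j in range(4)] for i in range(4)]
```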
[ "What math misconception (statement seems true, but actually it isn’t) do you have in mind?" ]
[ "math" ]
[ "cpaljx" ]
[ 46 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
null
Every totally ordered chain of subsets of the natural numbers is countable (with respect to strict containment of sets). Intuitively you have to add at least one new natural number at each step and you should therefore run out of natural numbers after countable many steps. Yet this isn't true. To see this pick your favorite bijection f from the natural numbers to the rationals. Then if r is a real number we define A_r to be the set of all natural numbers n with f(n) < r. This yields an uncountable strictly increasing totally ordered chain of subsets of the naturals.
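The construction in this comment is easy to play with on finite prefixes. Below is a throwaway sketch: the Calkin–Wilf recurrence enumerates the positive rationals (standing in for the bijection f; using all of Q instead of just the positive rationals changes nothing essential), and the finite truncations of A_r already nest strictly for distinct reals r:

```python
import math
from fractions import Fraction
from itertools import islice

def rationals():
    """Calkin-Wilf enumeration of the positive rationals: 1, 1/2, 2, 1/3, 3/2, ..."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)

def A(r, N=200):
    """Finite stand-in for A_r = {n : f(n) < r}, using the first N values of f."""
    return {n for n, q in enumerate(islice(rationals(), N)) if q < r}

# Distinct reals give strictly nested sets; here 7/5 and 17/12 (both among the
# first 200 Calkin-Wilf terms) witness the strictness of the two inclusions.
nested = A(1.4) < A(math.sqrt(2)) < A(1.5)
```

Of course no finite computation exhibits the uncountability itself; the sketch only shows the mechanism by which each real r carves out its own set A_r.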
That the (product of the first N primes) + 1 is not necessarily a prime itself.
However, you are guaranteed that none of its prime factors are among the first N primes.
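The smallest counterexample is worth seeing once. A throwaway check (with a naive trial-division test of my own) showing that 2·3·5·7·11·13 + 1 is composite, yet its smallest prime factor, 59, indeed lies outside the first six primes:

```python
from math import prod

def is_prime(n):
    """Naive trial-division primality test -- fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

first_six = [2, 3, 5, 7, 11, 13]
n = prod(first_six) + 1                # 30031
composite = not is_prime(n)            # True: 30031 = 59 * 509
smallest_factor = next(p for p in range(2, n) if n % p == 0)
```

N = 6 is in fact the first failure: the products for N = 1..5 all give primes (3, 7, 31, 211, 2311).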
This seems like notational abuse. An "identity function" should have the same domain and codomain as objects in the relevant category, not just isomorphic as underlying sets. If you're thinking of functions like f(x) = x with domain (R, std) and codomain (R, discrete), that's not really an identity function in the category Top. This relates to not confusing a function with a formula!
What is this black magic
[ "A question from a Math book." ]
[ "math" ]
[ "cpguh6" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.29 ]
null
haha, I'm sure they can, but I bet they won't. Why not just post in one of the subreddits they linked you to? It'll just take a minute to repost, and you'll get a much more friendly response I'm sure.
From the sidebar: Homework problems, practice problems, and similar questions should be directed to /r/learnmath , /r/homeworkhelp or /r/cheatatmathhomework . Post in those and people in those subs will help you out.
From the sidebar: Homework problems, practice problems, and similar questions should be directed to /r/learnmath , /r/homeworkhelp or /r/cheatatmathhomework . Do not post this type of question in /r/math .
Unfortunately, your submission has been removed for the following reason(s): /r/learnmath /r/homeworkhelp /r/cheatatmathhomework /r/math If you have any questions, please feel free to message the mods . Thank you!
[ "What ever happened to Abstraction Theory?" ]
[ "math" ]
[ "cp78o1" ]
[ 33 ]
[ "" ]
[ true ]
[ false ]
[ 0.81 ]
I've seen it mentioned in a few papers, but... did Abstraction Theory ever lead anywhere, or did it never start to begin with? It seems like an incredibly interesting field of study, but it seems rather... defunct.
I’ve never even heard of it - care to post a link to a representative paper?
I may be wrong, but as I imagine it, it would probably be closer to Polymorphism's effects on 'programming power'. For instance, if I have Ball, Cone, and Cylinder objects which extend Shape, what can I do with just Shape? It seems to be more philosophical than mathematical, at the moment.
Reading the incredibly vague definition... Would this be similar to encoding in CS/ML?
Okay that makes more sense. Was trying to wrap my head around the concept with such limited documentation 😂
There is still a lot of philosophical work done on abstraction principles, but it is pretty much only done by people who are interested in a specific historical research program (i.e. updating Frege's ideas about the foundations of mathematics). For example... https://global.oup.com/academic/product/abstractionism-9780199645268
[ "Is u = u(x,t) considered an abuse of function notation? In what cases should it be used?" ]
[ "math" ]
[ "cpfzs4" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.4 ]
null
As with all communication, the goal is to be unambiguous and convenient. Here, u = u(x,t) is short for "u is a function of x and t, but it will be much more convenient if we don't have to write that every time we write u, so we won't". That's fine.
u = x + t. This is the same kind of "abuse of notation" if you are thinking of the symbol u as referring to the map. The definition of u would be u := (x, t) ↦ x + t. Arguably in many physical modeling contexts it makes more sense to think of a relation between three variable quantities instead of thinking of the mathematical formalism of maps based on set theoretic foundations per se. Then u = x + t might be a better notation than u(x, t) = x + t. Especially if the relation is something a bit more complicated, like an implicit relation among u, x, and t.
do you consider function-variable overloads like u = u(x,t) and y = y(x) abuses of notation This isn't a matter of "consider"; this is an example of the thing mathematicians call "abuse of notation". But "abuse of notation" is a term of art in mathematics, and it's not pejorative - indeed, it's recognized as a good practice when done well. if so, are these abuses better than their alternatives? Almost all abuses of notation are what, in CS terms, we would call implicit type casting; there's rarely confusion between u and u(x,t) because they have different types, and one can usually figure out from the context what the type needs to be, and therefore which is meant. The alternative - introducing a lot of careful notation to distinguish these things - generally involves a whole bunch of overhead establishing conventions, and turns out to be harder to read, especially when it's new: if your reader is used to this convention, they're in the habit of parsing it correctly, but it would slow them down to keep remembering "wait, which one is the calligraphic one?" But there are always exceptions: it's possible to write carelessly in a way that makes statements actually unclear, or at least very hard to parse, and there are situations which make such extensive use of functions and values of functions in complicated ways that it's worth the extra effort to make the distinction more carefully.
What does that mean? Aren't those the same thing? No, because u is the mapping itself (i.e. an element of a function space), where as u(x,t) is the image of (x,t) under the mapping u. Strictly speaking, u is a function, but u(x,t) is a number. My question is whether using the same symbol for both is appropriate.
Writing mathematics is sometimes an art of abusing notation. It's fine.
[ "Why do we use large PRIME numbers in RSA cryptography?" ]
[ "math" ]
[ "cpcg9j" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.33 ]
null
RSA is easy to reverse-engineer with small numbers.
Because prime numbers cannot be factorized down into products of smaller numbers. The idea is to use two sufficiently large prime numbers so the product of these numbers cannot be factorized easily.
IIRC RSA uses prime numbers because of your first statement. It makes it simple, knowing the private and public keys, to decrypt the message in a few steps. Now RSA uses large prime numbers because we don't have any equations to produce prime numbers directly. One of the ways of finding a prime number is iterative, meaning that the larger the prime number, the longer it takes to find it. This allows for rapid communication of an encrypted message with no one being able to decrypt it in a short time without the private key (which contains the large prime number).
Some important properties: primes have no other divisors except 1 and themselves. Also, in ℤ_p (the integers mod a prime), you avoid zero divisors. Coprimality is also an important property for RSA (gcd(a,b)=1). I'm assuming you pick primes so that it's really hard to guess the solution.
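To make the role of the two primes concrete, here is a toy sketch of RSA key generation and encryption with deliberately tiny primes; the specific numbers are illustrative only, since real RSA uses primes hundreds of digits long:

```python
# Toy RSA sketch: security rests on n being hard to factor back into p and q.
from math import gcd

p, q = 61, 53              # two small primes (real RSA: hundreds of digits)
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # easy to compute only if you know p and q

e = 17                     # public exponent, must be coprime to phi
assert gcd(e, phi) == 1

d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

m = 42                     # message, encoded as a number smaller than n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers m
```

An attacker who could factor n = 3233 back into 61 × 53 could recompute phi and d; keeping that factorization infeasible is exactly why the primes must be enormous.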
[ "Fraleigh's book about abstract algebra" ]
[ "math" ]
[ "cpd12i" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.43 ]
null
I don't have Fraleigh's book on hand, so take this with a pinch of salt, as I don't know what other definitions / context he was working with. In the context of finite groups, composition series and subnormal series are roughly equivalent. Supposing a composition series exists (which they always do for finite groups), then any subnormal series can be refined to a composition series by (quoting from Wikipedia) "inserting subgroups into the series up to maximality". In the context of infinite groups, yes, these definitions are not the same, ℤ being an excellent counterexample as you stated. Section 4.6 in Nathan Jacobson's Basic Algebra I has a nice discussion on the interplay between the two; if you have a copy I'd suggest looking at it, if not PM me.
Definitions in math aren't set in stone. While the second definition of solvable is definitely the standard one, it isn't 'wrong' to define solvable to mean something different.
I'm not as far into advanced math as most people on this subreddit, so forgive me if this is an obvious question... but is that really the right way to look at things? Obviously you can make your own definitions and create your own systems, but part of the purpose of creating universal definitions is communication. Part of why we have standardized notation is to ease collaboration and sharing. I haven't gotten this far into abstract algebra, but it seems like the above definition of a 'solvable group' is a relatively well known, foundational concept, right? Like... if we had two different definitions for a Hermitian matrix, I would expect one to eventually lose out culturally, and a 'correct' definition to become synonymous with a single concept that everyone uses to reason and communicate. Reading Judea Pearl's 'Causality', it struck me how in flux a lot of the concepts, definitions, and nomenclature were, but that seems like the natural state of a mathematical theory that's still evolving into its final 'crystal' form, you know? Like, you CAN use non-standard notation and such, but isn't it always better for the sake of clarity to use what you need to make sure you're properly understood? It's not like this is just for scratch paper after all; the point of math in a communal context is unambiguously and quickly communicating challenging concepts. Grammar and vocabulary in human languages can be in flux too (slang, etc) but it's always ideal to have a lingua franca when different groups are trying to talk to each other, or at least it seems that way to me.
I would expect one to eventually lose out culturally, and a 'correct' definition to become synonymous with a single concept that everyone uses to reason and communicate Indeed, that is what one would probably expect. But take a look at this link for instance. There are a lot of examples in modern math where there are still multiple different definitions in use for things that have been around for quite some time. For instance, the definitions of the natural numbers (do they contain 0 or not), a ring (does it need to have a multiplicative identity or not), topological group (does it have to be Hausdorff or not), algebra (should it be associative, does it need to have a multiplicative identity). Don't even get me started on the ⊂ symbol (does it mean any subset or proper subset???). Most of the alternate definitions have only minor differences between them, but it definitely shows that sometimes multiple definitions continue to stick. All of this is really just on the side. In the specific case of this post, it is definitely true that the definition Fraleigh uses is non-standard, and I have never heard anyone else use it either, and in that sense I agree with the general sentiment of your comment. The point of my comment was mainly to point out that while it is indeed non-standard, and awkward for the sake of clarity, it is not wrong, as OP does seem to believe. Like you said, it is generally not really useful to create your own definitions when there are standards, but that doesn't change the fact that it isn't wrong, or a mistake; it's just awkward.
Unfortunately, your submission has been removed for the following reason(s): try /r/learnmath, or the books and free online resources linked here. If you have any questions, please feel free to message the mods. Thank you!
[ "Vector calculus book with a mix of differential geometry to it, above the \"plug'n'chug\" level" ]
[ "math" ]
[ "cp2xc2" ]
[ 8 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 1 ]
null
I love Hubbard & Hubbard; it brought everything together for me and truly prepared me for advanced math.
Take a look at multivariable mathematics by Theodore Shifrin. You can even watch lecture videos based on the material in the book on YouTube taught by the author if you search math 3500/3510.
It would probably be pretty hard to read the book without the class. Source: I'm a student in the videos.
Take a look at Advanced Calculus: A Geometric View by James J. Callahan.
Darling AKA differential form all the things.
[ "The square root of Pi... Means?" ]
[ "math" ]
[ "cow8ft" ]
[ 0 ]
[ "Removed - incorrect information" ]
[ true ]
[ false ]
[ 0.38 ]
null
You started with an incorrect assumption. Pi is a constant, not a function.
pi is the derivative of the circumference function C = pi x D with respect to diameter. This is not so stupid as one might think, because C is the derivative of the area function A = pi x r^2 with respect to radius. The fact that the ratio of circumference to diameter is constant is nontrivial, and false in non-Euclidean geometry, for instance on the surface of a sphere.
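The area/circumference relationship above is easy to check numerically; a quick sketch using a central difference:

```python
# The circumference C(r) = 2*pi*r is the derivative of the area
# A(r) = pi*r^2 with respect to the radius r.
import math

def area(r):
    return math.pi * r * r

r, h = 3.0, 1e-6
dA = (area(r + h) - area(r - h)) / (2 * h)   # central-difference derivative
print(dA, 2 * math.pi * r)                   # the two values agree
```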
Well, if anything, pi is a set of functions and therefore has no derivative in that sense. And the square root of pi is just another set of functions.
I mean there is the PI function (the gamma function but it's actually the factorial)
[ "Mind. Blown. 🤯" ]
[ "math" ]
[ "cp30lx" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.43 ]
null
I dislike the fact that he's talking about a uniform distribution sample etc, as it makes the question and its solution more complex, not less. The two-daughter problem is actually really not difficult to understand once you grok the subtlety that an "older daughter" and a "younger daughter" aren't simply interchangeable things as electrons* might be. The problem with the "if I ask a question to get information" thing isn't about draws from a hat; it's simply that "if I ask a question to which I might get distinct answers", there are twice as many possible outcomes which I then have to multiply the set count by. Looking at it right, you can see that this is exactly the type of thinking that leads to the colloquial joke of "what are the odds of (something rare) having happened?" being answered with "100%". * Honestly, if you really want your mind blown, take a look at entanglement probabilities and the fact that electrons are indistinguishable (being quantum objects and all), and therefore are counted in a different way. I'm running out of battery but can later edit this post with a link (unless some good samaritan can provide one).
Goodness gracious that is confusing! But I finally understand it, after reading and rereading. Probability theory really is the least intuitive area of mathematics.
Essentially it's the difference between two boolean connectives, OR and LEFT-TRUE. Given that you know P OR Q, there are three possible combinations of P and Q. Given that you know P LEFT-TRUE Q (or right-false, or right-true, or left-false), there are only two. Those four connectives are obviously rarely used but fit well as a metaphor in this specific case, if you think of "Statement P is true" as "Person P is female".
I think this highlights how it can be useful to think of probability in terms of “if [experiment] were repeated many times, how often would we expect to see [outcome]”, because it forces you to think about what the experiment and outcome are. In this case- is the experiment “take a random parent of two children and ask if they have a daughter” or “take a random parent of two children and ask them to tell you the gender of one of their children (each child being equally likely)” or “take a random parent of two children and ask about the gender of their oldest child”... etc.
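The dependence on which experiment is being run can be checked by brute-force enumeration; a sketch:

```python
# Enumerate the two-child problem: the answer depends on *how* the
# information "there is a girl" was obtained.
from itertools import product
from fractions import Fraction

families = list(product("GB", repeat=2))  # (older, younger), equally likely

# Experiment 1: ask "do you have at least one daughter?", condition on yes.
with_girl = [f for f in families if "G" in f]
p1 = Fraction(sum(f == ("G", "G") for f in with_girl), len(with_girl))

# Experiment 2: a random one of the two children is revealed to be a girl.
# Enumerate (family, revealed child) pairs, all equally likely.
reveals = [(f, c) for f in families for c in f]
girl_revealed = [(f, c) for f, c in reveals if c == "G"]
p2 = Fraction(sum(f == ("G", "G") for f, _ in girl_revealed),
              len(girl_revealed))

print(p1, p2)  # 1/3 versus 1/2
```

Both numbers come from the same four equally likely families; only the conditioning event differs.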
I agree, it is very confusing. The 1/2 to 1/3 based on sight really threw me off! 😂😂😂
[ "Need new graphing software" ]
[ "math" ]
[ "mao76x" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 1 ]
null
GeoGebra probably. By the way, what is csc?
The cosecant is defined as csc(x) = 1/sin(x).
Depending on the purpose of this graph, have you tried just graphing the positive and negative versions of your left side separately?
I doubt you can find a software that does not glitch when x is in the proximity of Pi, since csc(x) = 1/sin(x) and sin(x) goes to 0. So x / sin(x) becomes a very large number near x=Pi and applying tan does not help so the function tan(x/sin(x)) oscillates horribly near Pi and even Mathematica cannot really plot it there.
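A quick numeric sketch of that blow-up:

```python
# As x approaches pi, sin(x) -> 0, so x/sin(x) explodes; tan of a huge,
# rapidly changing argument then oscillates too fast for any plotter.
import math

for dx in (1e-1, 1e-3, 1e-6):
    x = math.pi - dx
    print(dx, x / math.sin(x))   # grows without bound as dx -> 0
```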
Why does it have absolute value bars? That is the source of the glitchiness; it makes those discontinuities extremely dense and most programs don’t bother to graph them properly.
[ "How much freedom do we have when extending the definition of a function?" ]
[ "math" ]
[ "majw3v" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.67 ]
null
You are completely free to define a function however you want. Maybe you want to look up the formal definition of "function".
Analytic continuation means that if two holomorphic functions on the same connected domain agree on a set with an accumulation point, then the original functions have to be equal on the whole domain. So no, you only have freedom when the domain is not connected. Once inside the same connected component, if the function can be extended, there is a unique continuation. That's why we talk about analytic continuation.
Often times you care about preserving some property. A simple example would be filling in a "removable discontinuity" in an otherwise continuous function. The function f(x) = x(x-1)/(x-1) has such a discontinuity at 1. If you choose F(1) = 1 and F(x) = f(x) everywhere else, then F is a continuous extension of f, which might be very useful. You're "free" to choose F(1) to be whatever you want in the sense that no one's going to come and tie you to a chair while they shred your papers and snap all your chalk in half. But you might want to impose that limitation on yourself. Similarly, there's the concept of "analytic continuation", where you have a function defined by a formula on some open subset of the complex plane and analytic there. The formula doesn't make sense (doesn't converge to a value) outside of that subset, but there's a unique way to define the function at other places in the plane where the extended function is also analytic on that larger domain of definition. Analytic functions are billions of times nicer than general functions, or even smooth ones, so people prefer that kind of extension to a general one.
The most natural such way is through something called analytic continuation. In short, imagine you have a `nice' function defined on any `small patch' in the complex plane. Then, there are some very nice theorems that say there exists one and only one `nice' function that can be defined on a larger patch, but still takes the same values as the original function on the small patch. This is in fact applicable to the case of the powers of two. For the infinite geometric series 1+r+r^2+r^3+..., the result is given by 1/(1-r), if |r|<1. In this case, the small patch is all complex numbers with absolute value smaller than one. The `nice' function is 1/(1-r). This function is nice for all values of r!=1, and agrees with the values we had on our small patch. So by these nice theorems, this is the only such `nice' function. Small patch here means open subset of the complex numbers; `nice' means analytic/meromorphic.
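A numeric sketch of this: the partial sums only converge on the small patch |r| < 1, while the continuation 1/(1-r) makes sense for every r ≠ 1.

```python
# Partial sums of 1 + r + r^2 + ... versus the continuation 1/(1-r).
def partial_sum(r, n):
    return sum(r**k for k in range(n))

r = 0.5
print(partial_sum(r, 60), 1 / (1 - r))   # agree inside the unit disc

r = 2.0
print(partial_sum(r, 20))   # diverges: 2^20 - 1 and growing
print(1 / (1 - r))          # the continuation still gives -1.0
```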
You're conflating a number of different ideas. First, the equation you've described is "f(x) = x = 69." You can let that first equality, the one relating f with a calculation" be whatever you want. However, then that calculation will be whatever it will be, and not necessarily 69. (I suppose that you can define any number to be equal to 69, but then you have a system of logic that contains contradictions and you can't do anything useful. But that's a conversation for another time.) Second, for the analytic continuation, where one exists, it is already defined in terms of another function, not in terms of whatever you feel like. Once you've defined a function, there is no choice to be made on its analytic continuation. Someone else said to look up the formal definition of a function. Here's a formal-enough definition: "given 2 sets, A and B, a function f "takes in" an element of A and "spits out" an element of B. Furthermore, for any element a in A, f(a) is defined. Also, f(a) can only take on one value from B." Notice that the sets need not necessarily be the real numbers; they can be any 2 sets. When you're defining a mapping from one to the other, you can pick whatever you want. If you start placing further restrictions on all possible functions between your sets, whether or not the single function you picked out follows those restrictions is a matter of logic. This is independent from actually making a choice.
[ "Has anyone invented this?" ]
[ "math" ]
[ "majoxw" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.67 ]
null
Seems to be correct. It is not something new as in nobody ever discovered it before you, but it is new as in you didn't have that formula, decided that you needed it, and made it. It is very important to be able to do that.
Thanks!
Here’s a fun challenge/exercise: given the lengths of the sides of a triangle, calculate the area of the triangle. I don’t know what level of math you are at, so I’m going to guide you through the steps as best as I can without just giving it all away. By the way, the formula we’ll derive here is equivalent to Heron’s formula but was derived by the Chinese independently of the Greeks (although Wikipedia is looking for a citation on that claim). Place a triangle so that the longest side is on the x-axis, and has 2 vertices at C=(0,0) and B=(a,0). The third point will be at A=(x,y), where 0<=x<=a. These names may look odd, but they ensure that point/angle A is across from side a (which conveniently has length a). So, side b is between points A and C (and has length b) and side c is between points A and B (and has length c). Note that this triangle has Area = (1/2)*a*y.
1) Using the Pythagorean Theorem, show that b^2 = x^2 + y^2.
2) Using the Pythagorean Theorem, show that c^2 = a^2 - 2ax + x^2 + y^2.
3) Substitute b^2 = x^2 + y^2 from the first formula into the second formula and solve for x to get x = (a^2 + b^2 - c^2)/(2a).
4) Back to the first formula again, solve for y to get y = sqrt(b^2 - x^2). Substitute the x from step 3 to get a formula for y that is only in terms of a, b, and c.
5) Substitute the formula for y from step 4 into the formula for the area given above, Area = (1/2)*a*y.
Congratulations, you have just derived a formula for the area of a triangle that is only dependent upon the lengths of the sides of the triangle.
Thanks for this comment. Btw is this Heron's formula? If it is, how can I simplify it to √(s(s-a)(s-b)(s-c))?
If you start at Heron’s formula, you can write:
s = (a+b+c)/2
s-a = (-a+b+c)/2
s-b = (a-b+c)/2
s-c = (a+b-c)/2
Then you can re-write Heron’s formula without s (i.e. you’d only have a, b, and c). I imagine if you multiplied that beast out and simplified you’d eventually get back to this new formula, but I’ve never actually tried to do it.
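A quick numeric check (a sketch, with hypothetical helper names) that the derived formula agrees with Heron's formula on a 3-4-5 right triangle:

```python
# Compare the step-by-step coordinate derivation with Heron's formula.
import math

def area_derived(a, b, c):
    x = (a*a + b*b - c*c) / (2*a)   # step 3 of the derivation
    y = math.sqrt(b*b - x*x)        # step 4
    return 0.5 * a * y              # step 5

def area_heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(area_derived(5, 4, 3))   # 3-4-5 right triangle: area 6.0
print(area_heron(5, 4, 3))     # 6.0 as well
```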
[ "Does everyone already know about this? nth roots raised to nth roots n times = n...." ]
[ "math" ]
[ "mabvgs" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.44 ]
null
You actually proved that (√2 ^ √2) ^ √2 = 2. And this is not surprising since (a^b)^c = a^(bc), and √2*√2 = 2. It is not true that √2^(√2^√2)=2. Certainly, the only value of x for which √2^x=2 is x=2, since √2^x is increasing. Thus, √2^(√2^√2) is not 2, since √2^√2 is not 2, since √2 is not 2.
Here's a fun related problem. Consider x ^ (x ^ (x ^ ...)) = 2, with infinite nested exponents. What is x? If you do the "usual substitution thing," you can get the equation x^2 = 2, so x = √2 is a solution. However, what if we instead had the equation x ^ (x ^ (x ^ ...)) = 4? Well, doing the substitution, we get x^4 = 4, so x = 4^(1/4) is a solution. But wait, 4^(1/4) = √2. So the infinite nested exponent expression √2 ^ (√2 ^ (√2 ^ ...)) is simultaneously 2 and 4. Does this mean 2 = 4? Why does the "usual substitution thing" sometimes fail with these infinite expressions? What does √2 ^ (√2 ^ (√2 ^ ...)) actually equal?
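One way to resolve the puzzle is to actually iterate the tower, t ← x^t. For x = √2 the iteration converges to 2, so 4 solves the fixed-point equation without being the value of the infinite expression. A sketch:

```python
# Iterate t -> x**t starting from t = x; the limit is the value of the
# infinite power tower x^(x^(x^...)) when it converges.
import math

x = math.sqrt(2)
t = x
for _ in range(200):
    t = x ** t
print(t)   # converges to 2, not 4
```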
It is false. If it were true, we'd have n=(n^(1/n))^n=(by assumption)=(n^(1/n))^(nth root to nth root n-1 times). Taking log of that we have n=nth root to nth root n-1 times Repeat that n-1 times and we get n=n^(1/n), which is clearly not true for all interesting n. The error is in the third line -- they seem to use something like log(a^b)=a*log(b) which is not how logarithms work. The equality begins to work if you invert the order of operations: (sqrt(2)^sqrt(2))^sqrt(2)=2.
This is a cool way of highlighting the "infinite expressions break" idea
The order of operations should be ln(a^b) = b * ln(a); given that, it seems to me the proof still works if you interpret the order of ops in the opposite direction, which means it’s something other than the direct chain of exponents... I’ll take a look at it again.
[ "Trash grades in 8th grade." ]
[ "math" ]
[ "ma79m0" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.43 ]
null
Take a critical look at your study skills. A professional tutor can help with this. There's probably some YouTube tutorials on study skills (and probably math specifically) if you look, though your mileage may vary there.
Get a tutor.
Second this. Khan Academy is an outstanding resource.
r/learnmath
[ "Is it possible to estimate the area of a bifurcation diagram using a Monte Carlo simulation?" ]
[ "math" ]
[ "ma4hdv" ]
[ 7 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.67 ]
null
If I understand correctly you want to calculate the area/measure of a set S determined by whether or not a logistic equation diverges (e.g. the Mandelbrot set). To do this you need a random variable X: O -> E, where E is a superset of S, and an indicator function f: E -> {0,1}. You then estimate the area of S as a proportion of E, weighted by the probabilities of each outcome observed. Note that the indicator function may not be decidable, as is the case with the Mandelbrot set, but you're estimating anyway, so that shouldn't matter too much. As an example you may try to estimate the area of the Mandelbrot set in the range (0,1)×(0,1), so you could uniformly generate points in the range, and the proportion of the points in the set would be your area estimate.
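A minimal sketch of that recipe for the Mandelbrot set, using the standard bounding box [-2, 0.5] × [-1, 1]; the iteration cap is the usual workaround for the undecidable indicator mentioned above:

```python
# Monte Carlo area estimate: sample a bounding box, count indicator hits.
import random

def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # escaped: definitely not in the set
            return False
    return True               # survived max_iter steps: count as "in"

random.seed(0)
n = 20000
box_area = 2.5 * 2.0          # box [-2, 0.5] x [-1, 1]
hits = sum(in_mandelbrot(complex(random.uniform(-2, 0.5),
                                 random.uniform(-1, 1)))
           for _ in range(n))
estimate = hits / n * box_area
print(estimate)   # the true Mandelbrot area is believed to be about 1.506
```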
well idk he asked and I just answered you don't have to do a deep analysis with quotes and all about it
Well he asked so
[ "Pi is approximately 22/7 !!!" ]
[ "math" ]
[ "ma3bmw" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.25 ]
null
Umm, yeah? It means close, but not 100% the same.
22/7 is not exactly equal to pi. 22/7 is approximately equal to pi. 22/7 = 3.142857... This is accurate up to 3 digits, which makes it a good approximation. https://en.wikipedia.org/wiki/Approximation
Thank you i get it now!! :))
Neither video says pi is exactly 22/7. You might want to rewatch the second one.
"And i saw prove vid that 22/7 is not approximately to pi prove here" Have you actually watched the video? Do you know what 'approximately' means? Even π ≈ 10 within an appropriate context; it is never a claim that they are actually equal.
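The video in question is presumably about the classic integral ∫₀¹ x⁴(1−x)⁴/(1+x²) dx = 22/7 − π: the integrand is positive on (0,1), so 22/7 strictly exceeds π. A numeric sketch using a midpoint rule:

```python
# The integral of x^4 (1-x)^4 / (1+x^2) over [0,1] equals 22/7 - pi.
import math

def f(x):
    return x**4 * (1 - x)**4 / (1 + x**2)

n = 100000
integral = sum(f((k + 0.5) / n) for k in range(n)) / n   # midpoint rule
print(integral)           # ~0.00126...
print(22/7 - math.pi)     # the same value: 22/7 overshoots pi by this much
```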
[ "Does anybody find Nancy Pi the Youtuber's videos helpful?" ]
[ "math" ]
[ "manvwn" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
[deleted]
Your comment could have gone without the ogling. That's childish and detrimental to academia as a whole; why did you feel the need to say this? The focus of someone's work, even if it is educational youtube videos, should be the work itself. Not how they look.
Yes, but also brilliant first and foremost!
Oh yeah, I had a blast. I love math and I had a good excuse, quite the twofer.
[ "Parity Bitmaps from the OEIS" ]
[ "math" ]
[ "mainqs" ]
[ 33 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
Hi Peter, nice pictures. (I must come back to editing sequences for OEIS...) It is not clear to me how the sequence is converted into a two-dimensional array. (I'm not fluent enough in Python to check the code.) For example, A048152 is "Triangular array T read by rows: T(n,k) = k^2 mod n, for 1 <= k <= n, n >= 1". So in principle, plotting it in a given range for k and n should give a triangular array, not a square. In other words, if your image is a plot of the parity of mod(k^2, n), I don't get what the coordinates of the plot are (clearly not k and n).
Hi Giovanni, Great question! As /u/EdgyMathWhiz suggests, the code always encodes the image by reading the sequence antidiagonals upward. (Even if the sequence is a triangle or is meant to be read by antidiagonals downward.) So the sequence A, B, C, D, E, F, G, ... gets interpreted as
A C F .
B E .
D .
G
Similarly, my Twitter bot Parity Triangles Bot ( @oeisTriangles ) reads everything as a triangle. So the sequence A, B, C, D, E, F, G, ... gets interpreted as
A
B C
D E F
G . .
Looking at the code, it fills it assuming a triangular layout. That is, it starts at the (0,0) corner and then at each step if its current position is (i,j) then: If i>0 move to (i-1, j+1) otherwise move to (j+1,0). (Or something like that - I might have flipped i and j).
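That fill rule can be sketched as follows (a hypothetical antidiagonal_grid helper, not the actual bot code):

```python
# Walk antidiagonals upward: from (i, j) go to (i-1, j+1), or start the
# next antidiagonal at (j+1, 0) when the top row is reached.
def antidiagonal_grid(seq, size):
    grid = [[None] * size for _ in range(size)]
    i = j = 0
    for term in seq:
        if i < size and j < size:
            grid[i][j] = term
        if i > 0:
            i, j = i - 1, j + 1   # step up the antidiagonal
        else:
            i, j = j + 1, 0       # start the next antidiagonal
    return grid

g = antidiagonal_grid("ABCDEFGHIJ", 4)
print(g[0][:3])  # ['A', 'C', 'F']
print(g[1][:2])  # ['B', 'E']
```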
That's right! I didn't do anything to the image after it was generated.
A048152 : Triangular array T read by rows: T(n,k) = k^2 mod n, for 1 <= k <= n, n >= 1. 0,1,0,1,1,0,1,0,1,0,1,4,4,1,0,1,4,3,4,1,0,1,4,2,2,4,1,0,1,4,1,0,1,4,... I am OEISbot. I was programmed by /u/mscroggs . How I work . You can test me and suggest new features at /r/TestingOEISbot/ .
[ "A puzzle about zero-knowledge proofs" ]
[ "math" ]
[ "mahm2d" ]
[ 4 ]
[ "" ]
[ true ]
[ false ]
[ 0.7 ]
null
Ahh.. i did not think of that... Also this would raise another issue. If one of the parties is malicious, then they can just keep agreeing to what the other says and note down all the bits of the other and probably brute force to determine the other's suspect... Dunno.
Decluttered version of this the Guardian's article archived on March 22, 2021 can be viewed on https://outline.com/RTgrPu
I prefer How to explain zero knowledge protocols to your children
I think the following would work: both parties agree on a sequence of hash functions f_1, f_2, ..., f_n and bit positions p_1, p_2, ..., p_n. They both compute the hash values of their suspected person. Suppose x and y are the integer values of the suspects. Then they compute f_1(x), ..., f_n(x) and f_1(y), ..., f_n(y). Now they compare the bit positions: is f_i(x)(p_i) == f_i(y)(p_i), for all i? It is probabilistic in nature, and the larger the value of n, the higher the confidence. If we used only a single hash function and exchanged the hash values, then technically we can brute force and reveal x and y, which we must prohibit...
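A toy sketch of that scheme, with salted SHA-256 standing in for the agreed hash family (illustrative only; as noted elsewhere in the thread, this leaks bits and is not a real zero-knowledge protocol):

```python
# Compare one bit of each of n salted hashes; equal suspects always match,
# different suspects survive all n comparisons with probability ~2^-n.
import hashlib

def bit(x, salt, pos):
    digest = hashlib.sha256(f"{salt}:{x}".encode()).digest()
    return (digest[pos // 8] >> (pos % 8)) & 1

def probably_same(x, y, n=64):
    return all(bit(x, i, i) == bit(y, i, i) for i in range(n))

print(probably_same("Colonel Mustard", "Colonel Mustard"))  # True
print(probably_same("Colonel Mustard", "Professor Plum"))   # almost surely False
```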
Cryptography is largely predicated under the assumption that one way functions exist.
[ "Stephen Roman of Roman: Advanced Linear Algebra has a very extensive YouTube presence." ]
[ "math" ]
[ "m9xsdd" ]
[ 250 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
null
The following is an excerpt of his personal webpage: http://www.sroman.com/MyBooksForSale/Algebra/Algebra%20Series.php
Abstract Algebra: A Comprehensive Introduction by Professor Steven Roman
The four books
Volume 1: Linear Algebra
Volume 2: Group Theory
Volume 3: Ring and Field Theory
Volume 4: Order and Lattices
form the basis for my YouTube lecture series with the same title. Here is the current status of the series as of March 21, 2021.
The YouTube Lectures
So far, I have posted about 50 lectures on linear algebra and am trying to post a new lecture every couple of days (barring illness or other issues). I cannot say how many lectures there will be in the linear algebra series, but as a very rough guess, the number could approach 100 or more. Once the Linear Algebra lecture series is done, I will do the Group Theory lectures, then the Ring and Field Theory lectures and, if there is enough interest, the Order and Lattice Theory lectures. Of course, this is a huge project and will take some time to complete.
The Books Upon Which the YouTube Lectures Are Based
As to the books, the Linear Algebra and Group Theory books are complete, but as you can tell from the lectures so far, I am constantly making changes to my work, guided in large part by the videos that I create. This includes fixing typos as well as improving the exposition, generally in minor ways. Throughout my professional life, I have NEVER been very satisfied with my work (including both my books and my lectures) and I always have--and probably always will--tweak my work every time I take another look at it. (As an aside, this is one of my many frustrations with commercial and academic book publishers. They do not provide an opportunity to improve my books unless I am willing to wait several years and then add enough NEW material to warrant a new edition! This hurts both me and my readers.) The Ring and Field Theory book needs a bit more attention before I would call it ready for public scrutiny.
I have had several inquiries (through comments and emails) as to when the books will be available for purchase. Recognizing that it will be much easier to study the material with a copy of the book on hand, I am making the current version of the Linear Algebra and the Group Theory books available for sale (see below). I hope you enjoy them and thanks for your interest in my work. You will never know how much I appreciate it! If you do purchase the current version of one of my books, you will receive it immediately and when the lecture series for that book is complete, I will send you the final version. I am offering PDF versions of the books in this series for $29.95 each (plus tax if you live in California). For the tables of contents of the books, please click on one of the links below.
He was my linear algebra professor! I've got his book right next to me on the desk.
Very nice! That book was pretty tough for me to use when I took my graduate linear course a few semesters back. Gonna use this as an extra resource to review.
damn, I'm in mood for some umbral calculus. Sadly he hasn't youtubized that book ..yet
Nice, thanks
[ "Does Wavelet analysis make Fourier analysis obsolete?" ]
[ "math" ]
[ "mapg7p" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
What I mean to ask is this- can Wavelets replace Fourier analysis in everything, from Theoretical proofs to Practical applications? I am an electronics engineering student who is familiar with Fourier stuff (Fourier series, Fourier transform, Fast Fourier transform), but I have heard of Wavelets recently. That made me curious- are Wavelets superior to Fourier stuff in every way? Or are there somethings that only Fourier can handle, which Wavelets cannot?
Fourier transform diagonalizes differential operators, transforming linear PDEs into algebraic equations, which are easy to solve. I don't think that can be done with wavelets.
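A sketch of that diagonalization in action: differentiating a periodic signal by multiplying its Fourier coefficients by ik, with no finite-difference stencil anywhere.

```python
# Spectral differentiation: in Fourier space, d/dx becomes multiplication
# by i*k, so a differential operator turns into a diagonal one.
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)

k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # derivative via FFT

print(np.max(np.abs(du - 3 * np.cos(3 * x))))     # error near machine epsilon
```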
The short answer depends on how local the application/theory needs to be. If you want to look at stuff locally in time (think identifying notes in music), wavelets are probably going to be more useful. But if you want something over your entire domain (finding derivatives of signals), the Fourier stuff is better.
TL;DR: Nope...
Hi Ilyes. These are the top 3 resources I know of: The Fourier Transform and its Applications (Stanford) https://see.stanford.edu/Course/EE261 Digital Signal Processing 1: Basic Concepts and Algorithms (Coursera) https://www.coursera.org/learn/dsp1 Foundations of Signal Processing + Fourier and Wavelet Signal Processing: http://www.fourierandwavelets.org/
hey everyone! I've just joined this subreddit. I'm doing aerospace engineering, and we're meant to study Fourier analysis next year. I would like to ask: are there specific books, websites, or other ways I can get a fundamental and thorough understanding of Fourier analysis?
[ "Tannaka-Krien Duality" ]
[ "math" ]
[ "maqezd" ]
[ 205 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
I recently did a project in my Lie Groups course on Tannaka-Krein Duality and haven't seen any posts about the subject on here. I figured I'd share what I know about it, and hopefully those with more knowledge can chime in and share their perspectives. Duality is a topic that has sparked some recent discussion on this subreddit. The first time I, and I assume many others, learned about duality was in Linear Algebra. The dual space V* is the collection of linear maps from the vector space V to the underlying field. If V is finite dimensional, then the dual space V* is known to be isomorphic to V. However there is something unsatisfying about this; it's not canonical, meaning we need to choose a basis. To get a canonical map we need to consider the double dual V**, the set of linear maps from V* to the field; then the evaluation map from V to V**, given by evaluating a linear functional on a vector, is a natural isomorphism. This idea allows someone to reconstruct the original object from a collection of maps. Moving now to groups, specifically topological groups, we get a similar phenomenon. Namely, if we consider a locally compact abelian group G, then we can form a dual object G* = Hom(G, T), defined as the group of continuous homomorphisms from G to the circle group T, the subgroup of norm 1 complex numbers. This has a special name, the Pontryagin dual, and in order to get a canonical isomorphism we need to consider the double dual G**. The canonical isomorphism is again given by the evaluation map, this time by evaluating a character on a group element. So one can say that the characters of a group determine the group. Now at the beginning I specified that our group needed to be abelian; what happens for nonabelian groups? Well, anyone who has taken a course in Representation Theory will probably remember that character tables do not always determine the group. In the case of D_8 and Q_8 we have nonisomorphic groups with the same character table.
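In symbols, the two evaluation maps just described are exactly parallel (V finite dimensional, T the circle group); this is only a restatement of the paragraph above:

```latex
% Linear-algebra double dual: evaluate a functional on a vector.
\[
  \mathrm{ev}\colon V \to V^{**}, \qquad \mathrm{ev}(v)(\varphi) = \varphi(v).
\]
% Pontryagin double dual: evaluate a character on a group element.
\[
  \mathrm{ev}\colon G \to \widehat{\widehat{G}}, \qquad
  \widehat{G} = \operatorname{Hom}_{\mathrm{cts}}(G,\mathbb{T}), \qquad
  \mathrm{ev}(g)(\chi) = \chi(g).
\]
```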
So for nonabelian groups we need a different collection of objects to get a duality relation. It is at this point that I should confess that the book I learned this subject out of was Bröcker and tom Dieck's Representations of Compact Lie Groups. They discuss Tannaka-Krein duality using Hopf algebras, following Hochschild's approach. Through this project I learned that the original papers by Tannaka and Krein were done with category theory, so I'll attempt to give some idea as to what this viewpoint is at the end despite my lack of knowledge of the subject. To start we need to talk about representative functions. If we consider the ring C(G,K) of continuous functions from a compact Lie group G to a field K, then G acts upon this ring via left or right translations: R: G x C(G,K) -> C(G,K) is the right translation defined by R(g,f)(x) = f(xg). An element of this ring of continuous functions is called a representative function if it generates a finite dimensional G-subspace of the ring C(G,K). Moreover the set of representative functions forms a K-subalgebra F(G,K). Aside: a theorem of Peter and Weyl shows that this subalgebra is dense in C(G,K). Now, a la Pontryagin duality and the double dual, we need to consider the set G_K of K-algebra homomorphisms from F(G,K) to K in order to get an isomorphism. Once again this isomorphism will be the good old evaluation map: a Lie group element g determines the evaluation map f -> f(g) on representative functions, and we send g to this evaluation map. The algebra of representative functions is a Hopf algebra when given the appropriate maps of comultiplication, counit and antipode, along with the algebra multiplication and unit. Using comultiplication and tensor products we can define a group structure on G_K; composing a homomorphism with the antipode gives its inverse in this group.
We can define a topology on G_K by taking the weakest topology for which the evaluation maps from G_K to K are continuous. Which topology this is depends on the field K, but for the real numbers it is the finite-open topology. These facts make G_K into a topological group, and the map G -> G_K, sending a group element to its evaluation map, is an injective continuous map. Further work can be done to show that this is in fact an isomorphism. This is only half the story: for the other half we need to construct an isomorphism of Hopf algebras from a certain Hopf algebra to the algebra of representative functions on the group of algebra homomorphisms, in the case where K is the real numbers. The last sentence is very brief, but a full outline is in Hochschild. As mentioned previously, both the original papers and many sources I've seen phrase Tannaka-Krein duality in terms of category theory, so I'll attempt to give an overview of this. This is a broader setting, covering any compact topological group; we don't need to have a smooth structure. Comments and corrections are much appreciated! For this part we don't consider the algebra of representative functions and attempt to reconstruct G from it; rather we attempt to reconstruct G from its category of representations Rep(G) over the complex numbers. To do this we again need to consider a map a la the double dual, and the one we care about is the forgetful functor F: Rep(G) -> Vect. This forgets the representation structure and only remembers the vector space part. From here we consider natural transformations from F to F, so 2-morphisms of the forgetful functor. Every element g of a compact topological group G gives rise to a natural transformation from F(V) to F(V), acting as multiplication by g whenever V is a representation.
We have three important properties of this natural transformation: it preserves tensor products, it's self-conjugate, and it's the identity map on the trivial representation. Given this, we can consider the collection Aut(F) of all such natural transformations, and this is the object that is isomorphic to our compact group G. This is the theorem that Tannaka proved, and Krein expanded upon it by specifying which categories arise as categories of representations of a compact group. Hopefully this was coherent and somewhat interesting. I really enjoyed learning it, and would love to hear others' viewpoints on the subject, or resources to learn more about this in other contexts.
Nice post. One note is that it's not true that if G is a locally compact abelian group then G* is isomorphic (canonically or not) to G. For example, if G = T then G* = Z. In general the dual of a compact group is discrete and vice versa, so the only time G and G* can be abstractly isomorphic is when G is finite (in which case they are always abstractly isomorphic). Also, you can interpret the duality that you state as saying that all compact Lie groups are algebraic: given a (commutative) Hopf algebra A over the real numbers, there is a canonical structure of group on the set Hom_R(A, R), and the duality you state is that if G is a compact Lie group and A_G is the associated Hopf algebra, then there is a canonical isomorphism from G to Hom_R(A_G, R). But A_G is finitely generated as an R-algebra, so there is a surjection of R-algebras R[x_1, ..., x_n] -> A_G. Geometrically this basically says that Hom_R(A_G, R) is a subset of R^n defined by some collection of polynomial equations, and the topology is induced by the underlying analytic topology on R^n. You can do better and show that in fact Hom_R(A_G, R) is a subgroup of GL_n(R) defined by a collection of polynomial equations, a condition saying that it is a "linear algebraic group". [For the experts, Hom_R(A, R) is the set H(R) of R-points of the affine group scheme H = Spec A, given the analytic topology. This is a Lie group if A is finitely generated over R, and it is compact precisely when H is "anisotropic", i.e., when it does not contain G_m as a closed subgroup. In fact, there is an equivalence of categories between connected compact Lie groups and anisotropic connected reductive groups over R. A bootstrap of this using the theory of complexification and maximal compact subgroups shows that all complex reductive Lie groups are algebraic, a rather important fact.]
In general the dual of a compact group is discrete and vice versa, so the only time G and G* can be abstractly isomorphic is when G is finite. A note on your note: some groups are neither discrete nor compact, so you can be self-dual without being finite. In particular R is self-dual (the Fourier transform of a function on R is a function on R, whereas e.g. a function on the circle has Fourier coefficients which are a function on Z). More generally, any local field (e.g. the p-adics) is self-dual, and so are slightly more exotic things like the ring of adeles.
Sorry to be that guy, but it's Krein, right? The same Krein as in the Krein–Milman theorem?
Yes you're correct! I'll edit the post.
Really great summary! Such an interesting topic. From what I remember, I think Deligne has a paper in The Grothendieck Festschrift where the category theory bit is expressed in terms of the Barr-Beck theorem. It's an interesting read, if you haven't already read it.
[ "Interesting topics in graph theory" ]
[ "math" ]
[ "mapc1a" ]
[ 17 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
In my graph theory class, we have the option to give a lecture (~80 minutes) on material we find interesting. Any suggestions? I find almost all areas very interesting so it isn't obvious what I should pick. For reference, I have covered basically all of Diestel (except for infinite graphs), plus other topics like graph labelings, Hadwiger's conjecture, Erdos-Lovasz Tihany Conjecture, extremal functions, and probably some other things I'm missing. Thanks! Thanks for all the great suggestions.
I think spectral graph theory is super cool. There are all these really interesting analogies with functional analysis, and there's a spectral description of those graphs that correspond to root systems of exceptional Lie groups. Just the tip of the iceberg. I'm particularly fascinated by the "missing" Moore graph, which originally came about from spectral investigations of Moore graphs. EDIT/UPDATE: OH! I can't believe I forgot, Ramanujan graphs are also super cool. They're efficiently connected from the perspective of spectral graph theory, but also have ties to super deep theorems in number theory. Also, check out zeta functions of graphs.
I can definitely second spectral graph theory, especially if you want to do some visualisations. There's a lot of fun stuff there with spectral graph drawing, random walks, and discrete differential equations. The other thing I can recommend is (Co) homology of 0-1 simplicial complexes. It's basically just an algebraic way of describing hypergraphs, but it can be presented quite quickly and you can recover many familiar objects of graphs (connectedness, expansion,...) using purely algebraic means. For the right audience that can be a pretty entertaining lecture.
Seconding Spectral Graph theory - there's a fantastic wealth of material to explore there. I've always been interested in spectral clustering and the relationship between the Eigenspectra and global topology of the network.
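A tiny illustration of the spectrum/topology link mentioned above: the multiplicity of the Laplacian eigenvalue 0 counts connected components. This is a minimal sketch assuming NumPy, with a made-up 6-vertex graph (two disjoint triangles):

```python
import numpy as np

def laplacian(edges, n):
    """Combinatorial graph Laplacian L = D - A of an undirected graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Two disjoint triangles: a graph with exactly two connected components.
L = laplacian([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)], 6)
eig = np.linalg.eigvalsh(L)                # real eigenvalues, ascending

# The multiplicity of the eigenvalue 0 counts connected components.
components = int(np.sum(np.abs(eig) < 1e-9))
assert components == 2
```

For a connected graph the second-smallest eigenvalue (the algebraic connectivity) is strictly positive, which is the starting point for spectral clustering and expander bounds.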
Here are a few: Erdos-Faber-Lovasz Conjecture, Fractional coloring numbers, computational complexity of the graph isomorphism problem, edge coloring and Vizing's theorem.
Hilarious
[ "Is my class really intense, or is it just me?" ]
[ "math" ]
[ "mahpdk" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
For one of my first year, first semester classes, we're going from basic sets, logic, and algebra, all the way to integration, multivariate calculus and optimisation, with linear algebra thrown in at the end. I'm keeping up pretty well so far, but am I right in feeling like this is a really dense course, and a bit hammered by it, or is it actually pretty standard?
This sounds like an overview class, but in general, it's normal that you need to get accustomed to the speed university classes are moving. My experience comes from Europe (I'll guess you're in the US based on reddit user statistics), and here the math classes usually start from scratch, but you pretty much cover the stuff you learned in school (multiple years) in a few weeks. I imagine it's not that different in the US.
I agree with the other poster. I’m also in the UK. My experience isn’t super typical, but I think you can expect to spend the first semester covering everything you would have learnt at school at a very rapid pace and then start new material.
It's not unusual for a course to start with a whirlwind tour of various things that you're expected to already know, but that the instructor is worried you don't have fresh in your mind. Then you get to the real meat of the course and, while the pace might not change, you'll stop covering so many topics and just drill down into one thing for a few classes at a time. I think I was taught what vectors were at least seven times as an undergrad. If you never learned the whirlwind stuff before, then you will need to put in a lot of work to get caught up; that can be very hard.
Australia, uni, economics degree
First year of what? High school, college? Which country?
[ "Finitism, Sets, Math (naive ideas)" ]
[ "math" ]
[ "maigpp" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
I got swayed by finitism a couple years ago, and I find myself uncomfortable with a lot of the math I'm learning. I'm only beginning an undergrad math degree (just started real analysis), so I do not have much platform to speak on, but I've just found myself wondering... I don't really like the idea that math, in its current state, with all its infinities, can't easily be applied to the real world. It seems to depend on a human's intuition and interpretation of some kind of spoken/written language text. It also feels like it isn't concrete or rigorous enough to be readily applicable to the real world. For some reason, I suspect a highly concrete or rigorous system won't really be able to prove anything; I'm not sure why, but I still feel like we can do better. That's not to say I only want applied math--I'm just thinking, if we're going to abandon reality, then why don't we study other mathematical systems? Why all the same stuff, with sets and ZF? Why are other systems so rarely studied? If we're going to invest most of our energy into one system, why not make it one that more closely models reality? The experience for mathematicians will be the same. Maybe I'm shortsighted (certainly?), but also, things like the Banach-Tarski paradox just seem completely silly. What relevance does cutting up a pea and rearranging it into the sun have... to anything?
First, to allay some of your concerns about rigor: there is almost certainly no approach to any field of study in the history of human civilization which is more rigorous than the modern axiomatic approach to mathematics, and pretty much nothing could be more thoroughly separated from things like "a human's intuition and interpretation of some kind of spoken/written language text". Indeed, the entire shift to axiomatic mathematics in the late 19th century and early 20th century was due to concerns about rigor. It is true in some sense that a highly rigorous system won't really be able to prove anything, but only if by "prove anything" you implicitly mean "prove anything about the real world". Actually predicting how things will happen in the real world is messy, and relies on things like empirical data which prevent you from ever achieving absolute certainty or proof. Science has dealt with this dilemma by accepting standards lower than absolute certainty; mathematics has done so by not directly making any predictions about reality. Instead mathematics takes what you might call a conditional approach, saying "if you are dealing with a system that works according to these assumptions, we can guarantee that it will behave exactly like this". Figuring out which assumptions apply is not part of mathematics, but rather part of physics or biology or whatever field you may be applying mathematics to. The Banach-Tarski paradox doesn't have any relevance to reality simply because none of these scientific applications of real analysis involve anything having to do with nonmeasurable sets - unlike many other parts of real analysis which have found widespread applications in many disciplines. On another topic entirely, though, you don't have to worry about being restricted to ZF. Very, very few people actually work with ZF specifically. Mostly people just work on whatever they work on, with the implicit understanding that in theory everything they are doing could be reduced to ZF.
But that's just because ZF happens to be a very powerful theory that can construct models for lots of different things. If ZF (or at least modest extensions of ZF, such as Tarski-Grothendieck set theory) did not model absolutely everything people cared about working on, something else would have been chosen as a foundational theory instead.
It seems to depend on a human's intuition and interpretation of some kind of spoken/written language text. It also feels like it isn't concrete or rigorous enough to be readily applicable to the real world. Why all the same stuff, with sets and ZF? Why are other systems so rarely studied? There are some common (even among some mathematicians) misunderstandings here. First, formal mathematics is (nominally) quite concrete and entirely finite. A deduction from ZFC is a finite sequence of symbols, and its accuracy can be mechanistically checked. Writing proofs at that level of detail is a huge pain, so people usually abbreviate them by using more ordinary intuitive reasoning, but a lot of work has been done confirming that those ordinary language arguments really are abbreviations for formal arguments in ZFC (or other formal systems). The reason that there's not a lot of concern for the precise formal system is that they're largely interchangeable. A formal system like ZFC is something like the machine language of mathematics; we're able to encode everything we want to into it. It's convenient to work with a system that's very abstract precisely because it means that it doesn't get in the way of writing down the mathematics we actually need. I don't really like the idea that math, in its current state, with all its infinities, can't easily be implemented into the real world. Math isn't nearly as dependent on infinity as it appears. Statements about infinity frequently have reinterpretations as concrete, finitary computational information. Maybe I'm shortsighted (certainly?), but also, things like the Banach-Tarski paradox just seem completely silly. What relevance does cutting up a pea and rearranging it into the sun have... to anything? I'm not sure what you mean about "relevance". It's fundamentally a result about embedding the free group on 2 generators into the group of Euclidean motions.
Despite what some poorly thought through popularizations would tell you, it has nothing to do with peas or the sun.
We have proof assistants like Coq or Agda which use some very infinite type universes, but nonetheless from each proof of an existential statement in Coq you can extract a finite program that terminates and computes an element satisfying your existential statement. There is a large amount of constructive mathematics that uses infinities but has computational content. Also, there is a lot of finitary mathematics that has no clear computational content, like anything involving Graham's number or TREE(3). So I think the question of whether a piece of mathematics is easily applicable is a very separate issue from the question of whether or not it contains infinities. I have a lot of respect for constructivism and for ultrafinitism. But the kind of finitism that accepts TREE(3) but not aleph_0, that is just nonsense in my opinion.
Continuous mathematics was developed to model the real world and works surprisingly well. If you really think non-finite systems are problematic, then I suggest trying to find an explicit example where this is a problem. I have yet to see a compelling example where these foundations lead to incorrect answers when applied to real problems.
If you are taking real analysis then I suggest you put these reservations about infinity on hold for a semester or two. Some people think these issues are very serious, but most people don't. Study with an open mind. You say you were "swayed." What does that mean?
[ "Can you draw a line parallel to the line given using only straightedge?" ]
[ "math" ]
[ "mahmsj" ]
[ 11 ]
[ "" ]
[ true ]
[ false ]
[ 0.71 ]
I am a Japanese high school student interested in mathematics. Sorry if my English is bizarre. I've been thinking about straightedge and compass construction recently. And here is the question: can you draw a line parallel to a given line using only a straightedge? I discovered a way to draw a line segment whose length is any rational multiple of a given segment, using a compass only once. And I found that the two propositions below are equivalent. I wondered if proposition 2 is true when you cannot use a compass. So, I made this post. I think it is impossible to draw a line parallel to a given line using only a straightedge. I would appreciate it if you answered my question with a proof or a link where I can read the proof.
You can't. Assume you can draw a parallel to a given line. Now, imagine you have a circle already drawn and you don't know the center. Draw a line L that cuts it. Draw a parallel line L' to L that also cuts the circle. You have 4 points of intersection with the circle. Draw the quadrilateral and draw its two diagonals. The two diagonals will meet in a point X. Now draw a third line L" parallel to L that also cuts the circle in two other points. Repeat the construction of the quadrilateral using the intersections of L" and L with the circle. Again draw the diagonals. These meet in a point Y. Now, it should be easy to prove that the line connecting X and Y is always a diameter of the circle, i.e., it passes through its center. If you repeat this construction with 3 other parallel lines M, M', M" which are not parallel to the first 3 lines, you will get another diameter, different from the first one. The point where these two diameters cross is the center of the circle. So we have proved that given a circle, if we are able to draw parallel lines, then using only the straightedge we can find the center of the circle. But this is impossible, because Jacob Steiner proved that the center of a given circle cannot be constructed with a straightedge alone. For a proof see here.
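The key claim in the argument above, that the diagonal intersections X and Y of trapezoids inscribed on parallel chords lie on a common diameter, can be sanity-checked numerically. This is a minimal sketch assuming NumPy, with the unit circle and chord heights 0.2, 0.5, 0.8 chosen arbitrarily for the example:

```python
import numpy as np

def cross2(u, v):
    """2D scalar cross product."""
    return u[0] * v[1] - u[1] * v[0]

def intersect(p1, p2, p3, p4):
    """Intersection of line p1p2 with line p3p4 (assumed non-parallel)."""
    d1, d2 = p2 - p1, p4 - p3
    t = cross2(p3 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

def chord(y):
    """Endpoints of the horizontal chord of the unit circle at height y."""
    x = np.sqrt(1 - y**2)
    return np.array([-x, y]), np.array([x, y])

# Three parallel chords of the unit circle.
(a1, a2), (b1, b2), (c1, c2) = chord(0.2), chord(0.5), chord(0.8)
X = intersect(a1, b2, a2, b1)   # diagonals of the trapezoid on chords a, b
Y = intersect(a1, c2, a2, c1)   # diagonals of the trapezoid on chords a, c

# Both points lie on the perpendicular bisector x = 0, i.e. on a diameter.
assert abs(X[0]) < 1e-9 and abs(Y[0]) < 1e-9
```

By symmetry both intersection points sit on the perpendicular bisector of the parallel chords, which passes through the center; that is the diameter the proof uses.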
I know, but imho OP is not "breaking rules" (as in a fun puzzle), OP is asking a serious question about what can be done with straightedge alone.
A straightedge, as the name implies, has only one straight edge, not two. The other side might have any shape.
I recommend you to play a game called Euclidea if you’re interested in straightedge and compass construction.
Thank you for your easy-to-understand explanation!
[ "Making the leap to Putnam" ]
[ "math" ]
[ "ma71x2" ]
[ 22 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
Hello, I’m a sophomore studying physics and math (in that order). I’ve taken all the undergraduate analysis and algebra courses available to me. Recently, I’ve been thinking about entering into the Putnam for this upcoming year. I open Putnam and Beyond and literally the 3rd question in the book has me stumped. I have no prior competition math experience but I do have a lot of free time. If the 3rd question is already difficult, do I even bother preparing for the exam? Is that some sort of indicator that I simply will never be good at competition math? Or should I gradually make my way into Putnam territory by exploring other resources instead? Any advice on how to make the leap, what resources to use, where to start, whether I should even bother, etc. would be appreciated. I don’t need to do well; although it would be nice to do well. I just need to get better at this kind of math.
One thing to keep in mind: the median score for the Putnam is often zero. It's a hard test, far harder than any regular exam, and getting to the third question means you're already doing way better than a lot of people. Being stumped is normal. Kiran Kedlaya has a list of previous contests and solutions on his website, so you can look there if you're feeling stumped. Make sure to try the problems for a bit yourself, but if you're just not getting something, it's better to look at the solution than to be stuck forever.
I took it a few times. My best score was around 25. I think at the time, around 50% of test-takers scored 0. You can look up score statistics for previous years.
To be good at competitive math, I think you basically just have to practice. Spend a lot of time working on problems and don't look at the solution until you've been thinking about the problem for a while. Then take the time to really understand the solution and the "tricks" that they use. Learning how to identify and use these tricks can make some competition problems much easier. If it makes you feel better, the first time I took the Putnam I had no idea what I was doing and didn't get anything right. But I practiced for the next year and did much better the second time.
I would recommend starting lower down if you haven't done any competition math. Like honestly maybe even start at middle school level and look up some national mathcounts problems, then work your way up to high school level (AMC->AIME->USAMO). Then Putnam should be well within reach (I honestly found Putnam to be easier than USAMO).
I took the Putnam exam as a senior majoring in Mathematics (B.A) and Economics and scored a 30. My only experience with competition mathematics was a six week elective course, and I would say that course prepared me better than most of my analysis courses. Obviously having a thorough understanding of the basic undergraduate mathematics courses will help a lot, however competition math is about creative solutions and techniques. The book you're using is a great resource to begin with, as well as looking at previous exams as others noted. If you browse a few solutions to past putnams then you will begin to understand what I mean by "creative solutions", and practicing those problems is great experience. I'm pretty sure when I started Putnam prep I couldn't get past the first problem in my book, so I think you're at an excellent start.
[ "Hoffman Kunze" ]
[ "math" ]
[ "m9vcz1" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.91 ]
After taking 2 courses on linear algebra (1 computational and 1 proof based), I decided I wanted to go through Hoffman Kunze to read through the theorems and practice the exercises to hone my skills. However, I do not as of now want to get into chapters 4 and 5 (time constraint and such). Is it OK to skip them, and just go through chapters 6 to 9? (after finishing 1-3)
Chapter 4 is definitely important for understanding further chapters 6 and onwards. I don't think you need to read chapter 5 if you already know what a determinant is and how to compute it.
I remember I initially skipped them as well but think I then quite quickly ran into a wall where I needed both in chapter 6. I don't remember needing the last 3 sections of chapter 5 for chapter 6 though (since I skipped them). I didn't do 7-9 so can't say if you need them there. But if you did determinants recently in your courses (I hadn't done any linear algebra properly for years so needed a refresher) you could probably skip 5. Also I will say that both 4 and 5 went far more quickly/easily than I had anticipated when I originally skipped them. In fact, I'd just try to go to 6 without either and see if you get further than I did because you've just taken courses on the stuff anyway?
If you've taken an algebra course and are comfortable with polynomial ideals, you can probably skip 4 and 5. If not, then you'll want to have a solid understanding of 4. Chapter 5 can mostly be skipped, but it's worth at least understanding the determinant in terms of permutations if you don't already. Everything from modules onward is extra and doesn't get used until the end if I recall correctly. Personally, I feel as though a lot of the more advanced content involving modules etc in H&K is better off saved for an algebra course.
Chapter 4 seems to focus a lot on vector spaces of polynomials (both finite and infinite dimensional) and chapter 5 appears to delve way deeper into determinants than my previous course did. I am currently taking numerical analysis which covers a lot of 4, so I think I can skip it for now.
Thanks for the info. I'm in a similar situation as you actually since I haven't done linear algebra for quite a while, which is why I'm doing this. I will see how far I can go.
[ "General disjunction rule" ]
[ "math" ]
[ "vcte69" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.4 ]
null
the event A happening includes the case of B happening too, and vice versa, meaning the event "A and B" is counted twice. you gotta subtract so that it is only counted once.
a still happens even if b happens with it
independent does not mean mutually exclusive. think of flipping a coin and landing on heads and rolling a dice and landing on 4. these are independent from each other, and you can land on heads and roll a 4 at the same time.
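The coin-and-die example above can be checked exhaustively. This is a minimal sketch using only the standard library; both the independence product rule and the inclusion-exclusion identity come out exact:

```python
from fractions import Fraction

# Sample space: flip a coin and roll a die -> 12 equally likely outcomes.
outcomes = [(coin, die) for coin in ("H", "T") for die in range(1, 7)]

def P(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == "H"   # coin lands heads
B = lambda o: o[1] == 4     # die shows a 4

# Independence: P(A and B) = P(A) * P(B), yet A and B are not exclusive.
assert P(lambda o: A(o) and B(o)) == P(A) * P(B) == Fraction(1, 12)

# General disjunction (inclusion-exclusion): subtract the double-counted overlap.
assert P(lambda o: A(o) or B(o)) == P(A) + P(B) - P(lambda o: A(o) and B(o))
```

Here P(A or B) = 1/2 + 1/6 - 1/12 = 7/12; without the subtraction the overlapping outcome (heads, 4) would be counted twice.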
Events in probability are sets by definition, and P, the function for measuring probability, is a measure of events inside some universe set containing all outcomes. For example: let our universe be a square in the plane. Let A and B be circles that intersect each other and lie in our universe. Let P be the area of a figure divided by the area of our universe. Let "A or B" be the figure whose points lie in either of the circles A or B, and let "A and B" be the figure whose points lie in both A and B at the same time. If we calculate P(A or B) as P(A) + P(B), we can see that we count the intersecting part twice, so we subtract it. Check out Euler diagrams for visualisations and stuff.
Thank you, it helped the most to understand, I suppose. I am still a bit fuzzy when I try to do it without visualization, but I think I understood it.
[ "How can a mathematical statement be proven by one proof method but not another?" ]
[ "math" ]
[ "mal2dj" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
I am a non-math person taking a "Foundations of Mathematics" course and my professor said a given proposition can be proven by one proof method but be impossible to prove by another. How is this possible? And wouldn't the truth of such a proposition be questioned if it cannot be proven by one or more proof methods?
Just like not all ways can get you out of a maze.
As a simple example, some statements can be proven by exhaustion of cases, if there are a finite number of cases. But this doesn't work if you have infinite possibilities; you need a stronger method like induction. And then if you have uncountably many cases, regular induction won't work, so you may need to turn to transfinite induction (much to the annoyance of my real analysis TA). And also, generally, if you have a theorem that can only be proven using a certain axiom (because without that axiom it can go either way), then trying to prove it using a technique that doesn't make use of that axiom somewhere is doomed to fail.
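The exhaustion-versus-induction contrast can be made concrete in a proof assistant. Below is a minimal Lean 4 sketch (the theorem name zero_add' is ours, chosen to avoid clashing with the library lemma): the statement quantifies over all natural numbers, so no finite case split can prove it, but induction supplies the infinitely many cases at once:

```lean
-- `0 + n = n` ranges over all naturals, so exhaustion of cases is
-- impossible; induction handles every case uniformly.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The base case closes by computation, and the successor case reduces to the induction hypothesis; no other proof method in the "check each case" family could terminate here.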
And wouldn't the truth of such a proposition be questioned if it cannot be proven by one or more proof methods? In mathematics, only one proof of a proposition is required for the proposition to be proven true. Also, failure to prove a proposition does not mean that the proposition is false.
Think of your proof methods as tools in a toolbox. For some jobs, more than one tool can do the job. For others, one tool may be completely unsuited for it (e.g., attempting to hang a picture with only a saw). For more complex jobs, you often require multiple tools. Constructing proofs is similar.
The person you are referring to is totally correct. The key fact behind the standard proof is indeed the fact that the natural numbers are well ordered aka what makes induction work.
[ "Brilliant series on the history of mathematics" ]
[ "math" ]
[ "ma6xnl" ]
[ 607 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
This is a very interesting series on the development of mathematics that I have been watching for a long time. He is VERY good at explaining math and how it developed over the millennia. Be warned though, this is N.J. Wildberger. He is well known for not much liking the idea of infinite sets and real numbers. He usually doesn't let this get in the way of actually explaining concepts that involve infinity (e.g. the history of calculus), but it does come out at least when he talks about the criticisms of calculus throughout history. Other than that, to me it is perfect.
Thanks for this. The one thing I felt was really missing from my math education was the history behind the various theorems/proofs: what major problems were being worked on, state of math at that time, problems that led to this being discovered, motivations, etc.
For context, I'm just a PhD student in stats with a masters in pure mathematics and an interest in philosophy (I'm not a finitist btw). With that in mind, I'll try to describe his position as I understand it. The point of his position is not specifically about finding contradictions in ZFC or mathematics involving infinity (although one could try that approach). It's beyond that, in a sense: it concerns itself with what counts as meaningful mathematical discourse. Let's take as a quick example the fundamental theorem of algebra (this is a video by him). Let's consider two polynomials: p(x) = x^2+x and q(x) = x^5-2x+3. The first one has two real roots: -1 and 0. The second one has one real root: since it's not a rational number, we may call it, let's say, r, and proceed to do other things as we please. Perhaps God knows what this number is precisely, but we mere humans can only accept finite approximations. Well, that's life. This sort of sentiment, that some mathematical objects are simply not fully knowable because of our human limitations, is often criticized by Wildberger and other finitists (such as Wittgenstein), since they tend to see mathematics more as an invention than a discovery (I'm oversimplifying things here). From his webpage: We have politely swallowed the standard gobble dee gook of modern set theory from our student days---around the same time that we agreed that there are most certainly a whole host of `uncomputable real numbers', even if you or I will never get to meet one, and yes, there is no doubt a non-measurable function, despite the fact that no one can tell us what it is, and yes, there surely are non-separable Hilbert spaces, only we can't specify them all that well, and it is surely possible to dissect a solid unit ball into five pieces, and rearrange them to form a solid ball of radius two.
And yes, all right, the Continuum hypothesis doesn't really need to be true or false, but is allowed to hover in some no-man's land, falling one way or the other depending on . Cohen's proof of the independence of the Continuum hypothesis from the `Axioms' should have been the long overdue wake-up call. In ordinary mathematics, statements are either true, false, or they don't make sense. If you have an elaborate theory of `hierarchies upon hierarchies of infinite sets', in which you cannot decide whether there is anything between the first and second `infinity' on your list, . So, there are some points in mathematics where there's a divergence of sorts in how accessible certain objects are. We deal with that simply by calling them different names: finite vs infinite, computable vs uncomputable, decidable vs undecidable, true/false vs independent, and so on. His idea of mathematics is stricter than ours. Again, I disagree with him. But I believe his ideas can make us think about things we take for granted in our day-to-day lives.
well it might be ad finitum if wildberger had anything to say about it
Finitism is a perspective on philosophy of math that slides into total crackpot stuff very easily. Wildberger is a mathematician with a strong understanding of the subjects relevant to his beliefs about infinity while laypeople are not. So a listener is much more likely to end up with crackpot beliefs about finitism than Wildberger's very unusual but not "wrong" philosophical stance.
What are people's opinions about his views on modern mathematics? He doesn't believe in infinity and other related topics.
[ "integral" ]
[ "math" ]
[ "vchkki" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.67 ]
null
Differentiate ln(x') using the chain rule.
Integral u'/u du = ln(u)+C, no matter what u is
*Integral u' / u dx = ln(u) + C, assuming u is a function of x. (As we then have u' dx = du and it becomes Integral 1 / u du)
Yeah, you can either differentiate ln(x') using the chain rule: (ln(x'))' = (1/x') · (x')' = x''/x'. Or you can make a change of variables in Integral x''(r) / x'(r) dr: let u = x'(r), so du = x''(r) dr, and the integral becomes Integral 1 / u du = ln(u) + C = ln(x'(r)) + C.
That's why I wrote du ;)
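For anyone who wants to convince themselves numerically, here's a quick stdlib-only sanity check of the identity ∫ x''(r)/x'(r) dr = ln(x'(r)) + C, using the illustrative choice x(r) = r³ + r (my example, not from the thread):

```python
import math

# With x(r) = r**3 + r we have x'(r) = 3r^2 + 1 and x''(r) = 6r.
xp  = lambda r: 3 * r * r + 1   # x'(r), always positive, so ln is fine
xpp = lambda r: 6 * r           # x''(r)

a, b, n = 1.0, 2.0, 10_000
h = (b - a) / n
# Composite trapezoid rule for the integral of x''/x' over [a, b]
integral = sum(
    0.5 * h * (xpp(a + i * h) / xp(a + i * h)
               + xpp(a + (i + 1) * h) / xp(a + (i + 1) * h))
    for i in range(n)
)
# The claimed antiderivative evaluated at the endpoints:
exact = math.log(xp(b)) - math.log(xp(a))  # ln(13) - ln(4)
print(abs(integral - exact) < 1e-6)  # True
```

The trapezoid sum matches ln(x'(b)) - ln(x'(a)) to well below the tolerance, as the substitution predicts.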
[ "How can I get better at learning math?" ]
[ "math" ]
[ "vcecil" ]
[ 5 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.67 ]
null
Two things: 1) Consistency is key! (I cannot stress this point enough) 2) Try to escape the dry “gotta learn this” mentality - people learn much better when they want to learn, when there is value associated with the material to be learned. Find some related applications or topics of discussions that you truly enjoy. You could join a society, tech/math newsletter, etc. to get an idea of such things. Bonus: Examples, examples, examples. If you don’t like the examples or find they don’t have value, try to cook up your own - even if you don’t have great results, you are thinking mathematically.
Use Alcumus - start at prealgebra and work your way up
How to Excel at Math and Science by Barbara Oakley
For starters, sign up for Khan Academy . Do the Math section. Start at the beginning. It'll be worth it.
Thank you
[ "Hi guys! I used to be good at math in highschool, but ever since I started college, I'm not good with it." ]
[ "math" ]
[ "vcds4f" ]
[ 3 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.64 ]
null
In my experience, I had to shift to using memory less and logic more. But this is more time consuming.
There’s a technique called reconnaissance. The idea is to get the book as early as possible and begin going through it. Just read through the book; don't try to learn anything, just do it for the fun of it. Maybe try going over 1 or 2 problems in the examples section. If it makes no sense, try again later. Give it some time.
I had that problem between highschool and college too. Khan academy helped me a surprising amount. And honestly, YouTube videos and the internet in general are super helpful resources.
First of all, building patience is probably the most important thing. Math can be tedious and most of the time it’s very frustrating. Second, use YouTube. Especially in first year, everything is explained hundreds of thousands of times online, so you should be able to find a source that fits you. Third, your exercises always have a direct correlation to the material you were taught, so try to look in your lecture script for a point to start. For example, if you need to prove the derivative of the sine, you need the definition of the derivative; or if you need to prove that eigenvectors are orthonormal to each other, you need the eigenvector equation. And one final thing: first semester is always the worst, it will get better.
Getting a tutor is one of the best things that you can do. If you work hard on the homework exercises the day before meeting with the tutor, that will help. I would suggest three hours a week of tutoring if you are taking one math course.
[ "Whats wrong with this math behind roulette?" ]
[ "math" ]
[ "vcfyqe" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.29 ]
null
Completely asinine
Well definitely don’t go to the casino on a Tuesday, because then your chance of winning is 1/2*1/2*1/7 which is about 3.5%.
The first part isn't telling the whole story. If you choose black you have a 50% chance of getting black, but also if you choose red you have a 50% chance of getting red, so 25% + 25% = 50%.
You’ll bankrupt the casinos with this knowledge. I suggest you delete your post immediately and head straight to Vegas. I’d wish you good luck but we both know there is no such thing as luck.
Switching gives you a 2/3 chance of winning, because there's a 2/3 chance that you picked the wrong door the first time, and whenever you start on a wrong door, switching guarantees winning.
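The 2/3 figure is easy to verify by simulation. This is a minimal Monty Hall sketch (my own code, assuming the standard rules: the host always opens a non-prize door different from your pick):

```python
import random

random.seed(1)

def monty_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

n = 20_000
wins = sum(monty_trial(switch=True) for _ in range(n))
print(wins / n)  # close to 2/3
```

Running the same loop with `switch=False` gives a win rate near 1/3, matching the argument above.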
[ "Squaring the Circle Thought and Question" ]
[ "math" ]
[ "vc76i0" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.5 ]
null
What does a loop of string have to do with compass and straightedge constructions?
No. Different convex shapes with the same perimeter can have different areas. For example, a 1x3 rectangle and a 2x2 rectangle both have perimeters of 8, but areas of 3 and 4 respectively. For a more direct example, the area can be made to be near 0 by making the shape very long and skinny (imagine a 0x4 rectangle).
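The rectangle comparison above is easy to check directly. A tiny sketch with illustrative dimensions, including a near-degenerate "long and skinny" case:

```python
# Rectangles with w + h = 4 all share perimeter 8,
# but their areas range from 4 (the square) down toward 0.
for w, h in [(2, 2), (1, 3), (0.01, 3.99)]:
    print(2 * (w + h), w * h)
```

Same first column (perimeter 8) every time; the second column (area) shrinks toward zero, so perimeter alone never determines area.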
Unfortunately, your submission has been removed for the following reason(s): /r/learnmath books free online resources Here If you have any questions, please feel free to message the mods . Thank you!
Separately, drawing a circle and a square with the same perimeter is impossible (when the only tools allowed are straightedge and compass), because you still need pi: the square's side would be pi/2 times the circle's radius, and pi is transcendental, hence not constructible.
oh, I understand now. I misunderstood the need for pi, and that's what people were trying to avoid using.