title: list
subreddit: list
post_id: list
score: list
link_flair_text: list
is_self: list
over_18: list
upvote_ratio: list
post_content: string (lengths 0–20.9k)
C1: string (lengths 0–9.86k)
C2: string (lengths 0–10k)
C3: string (lengths 0–8.74k)
C4: string (lengths 0–9.31k)
C5: string (lengths 0–9.71k)
[ "How can you write the number 34 using five 3's? Is it even possible?" ]
[ "math" ]
[ "ytfj4l" ]
[ 1 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 1 ]
null
Sure: 33 + (3/3)
If factorials are allowed: (3!)(3!)-(3+3)/3
That gives out 33, not 34
This feels like the best answer
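For what it's worth, both candidate expressions from this thread can be machine-checked (a quick sketch of mine; note the first expression reaches 34 but uses only four 3's, while the factorial one uses exactly five):

```python
from math import factorial

# Checking the two candidate expressions from the thread.
# Note: 33 + 3/3 reaches 34 but uses only four 3's;
# the factorial expression uses exactly five.
four_threes = 33 + 3 / 3                                  # 34.0
five_threes = factorial(3) * factorial(3) - (3 + 3) // 3  # 36 - 2 = 34

assert four_threes == 34
assert five_threes == 34
```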
[ "How can I visualize a new algebra concept that I have come up with?" ]
[ "math" ]
[ "yt8x8e" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.45 ]
null
I'd start with pen and paper.
LaTeX might be able to. But keep in mind that most programs visualize math in the traditional sense, not the abstract.
Visualizing algebra could mean literally anything. Some options that might work for some cases: https://q.uiver.app/
I have been using pen and paper for some time now.
Thank you very much.
[ "Silly ArXiv game a colleague and I made. From just the titles and abstracts, can you guess which article is more recent?" ]
[ "math" ]
[ "yu6vw8" ]
[ 131 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
null
Considering training a neural network to play this game
Got to 11 on my second try. I'm clearly making nonrandom decisions somehow, but I couldn't tell you what I'm basing them on.
Just a suggestion: you might want to make it so articles are minimum 5-10 years apart. Otherwise it can get pretty tough
Unfortunately, the site doesn't really work on mobile yet. Just something that we threw together quickly for desktop.
SAME!! I have no idea what I was reading but I got 10
[ "Graph-Isomorphism in polynomial time" ]
[ "math" ]
[ "yu66xf" ]
[ 124 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
null
The general consensus is that graph isomorphism is unlikely to be NP-complete. If it were NP-complete, the polynomial hierarchy would collapse, and we really don't expect that to happen. That said, I'm skeptical this is accurate. There are known classes of very-hard-to-distinguish graphs, and I'm not seeing any indication that this really grapples with them. They also explicitly acknowledge that using the behavior of walks to distinguish graphs is an approach that has been studied for a very long time, but they don't seem to say explicitly what the big idea is that made it work here. The only really big new ingredient seems to be the binding graphs, which don't seem deep enough to be that powerful. But this is at least a serious attempt by someone with a clear understanding of the literature. (Disclaimer: I'm a number theorist, not a graph theorist, with only a single graph theory paper ever, and I haven't looked at this preprint in great detail.)
Would you say something like that about a Picasso? We do math for math’s sake.
Chemical databases usually have to solve something like graph isomorphism if they want to support looking up entries by structure.
The subgraph isomorphism problem (checking whether G is isomorphic to a subgraph of H) is NP-complete.
It's not known whether it's in P or NP-complete; the best known algorithm right now is quasipolynomial. That being said, I've only skimmed the paper and I don't buy it either.
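The naive baseline that these results improve on can be sketched in a few lines (the example graphs are my own; this brute force is exponential in the vertex count, nothing like the quasipolynomial algorithm mentioned above):

```python
from itertools import permutations

def are_isomorphic(adj_g, adj_h):
    """Brute-force graph isomorphism on adjacency-set dicts.

    Tries every vertex bijection: exponential time, i.e. exactly the
    naive baseline that faster isomorphism algorithms improve on.
    """
    n = len(adj_g)
    if n != len(adj_h):
        return False
    verts = range(n)
    for perm in permutations(verts):
        # perm is a candidate bijection; check it preserves (non-)edges.
        if all((perm[u] in adj_h[perm[v]]) == (u in adj_g[v])
               for v in verts for u in verts if u != v):
            return True
    return False

# A relabelled 4-cycle is isomorphic to C4; the path P4 is not.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
c4_relabel = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

assert are_isomorphic(c4, c4_relabel)
assert not are_isomorphic(c4, p4)
```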
[ "If two square matrices have the product Identity Matrix, are the two square matrices always inverses of one another?" ]
[ "math" ]
[ "ythum9" ]
[ 1 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.67 ]
null
I disagree slightly: an inverse, by definition, is a two-sided inverse. It just turns out that a one-sided inverse is a two-sided inverse in the case of linear maps between vector spaces of the same (finite) dimension.
Yes, this is the definition of inverse, and inverses are unique.
This is true for finite square matrices, i.e. linear maps between finite-dimensional vector spaces of the same dimension. For maps between infinite-dimensional vector spaces it is not true, as in that case one has to check that the inverse works from both sides! There are some simple examples where AB = 1 but BA != 1 (the left/right shift maps from "Hilbert's hotel").
The reason that matrices are special in this case (other users have given examples of situations where this fails) is the really nice equivalence between invertibility and det(A) != 0. If AB = I, then det(A)det(B) = 1, so both A and B have to be invertible. Existence of these inverses guarantees that A and B are two-sided inverses of each other. In other settings you can have AB = I while one or both lack a two-sided inverse.
That’s the definition of an inverse matrix.
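The finite-dimensional claim in this thread can be checked concretely; the 2x2 example below is my own, using exact rational arithmetic so the equality test is exact:

```python
from fractions import Fraction as F

def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(2), F(1)], [F(1), F(1)]]   # det(A) = 1, so A is invertible
B = [[F(1), F(-1)], [F(-1), F(2)]]

# For square matrices, a one-sided inverse is automatically two-sided:
assert matmul(A, B) == I
assert matmul(B, A) == I
```

The infinite-dimensional counterexample (the shift maps) cannot be exhibited with finite matrices, which is precisely the point of the thread.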
[ "What are your thoughts on MSE as of Oct 2022?" ]
[ "math" ]
[ "ytmuon" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
Maybe some questions to lead the discussion:
I do like using mean squared error but I don't think it should be used for everything
Some of the attitudes of newer mods are pretty obnoxious. I've gotten into it with one of them a few times over really inconsequential stuff that they felt very strongly about. A few power users also feel very strongly about everything and cannot handle people disagreeing with them on anything, but that isn't unique to MSE. The site overall is thankfully mostly unchanged over the years. EoQS was mostly a good thing which gave regular users a little more power in closing really low-effort questions. An annoying change is that certain power users now feel compelled to not only close questions but delete them as well, double-penalizing new users who aren't used to the site. One of said power users was banned for a couple years. Feels like a gross overcorrection meant to cause issues for the site and the mods.
Boring homework questions with a user base that gets dopamine hits from answering boring homework questions. Doesn't matter how specialized the tags are; the community of people that ask and answer interesting questions has left.
Probably true, but I feel like MO has kept its quality, although the questions are hardly comparable.
There was because it was too draconian at first, but with enough user push back it was pared back a bit. I think it hit a nice medium a few months later. That said, some mods started wading into closures too much (closed pretty much every question that didn't explain everything, even posts by people who just didn't know how to explain themselves), and it caused a lot of issues. Many slap fights held in Meta.
[ "You have the opportunity to teach Mathematics at the Pythagorean school (530 b.C.). What would you teach them among the things they don't yet know?" ]
[ "math" ]
[ "ytyez0" ]
[ 506 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
null
That √2 is irrational. Wait why are you taking me to a boat
0 (Positional number representation)
I read a book about the history of mathematics once (by Carl B. Boyer; it's pretty dry, I don't recommend it), and what it really highlighted to me was that the development of mathematics was largely about conceptual and philosophical leaps. "Oh, I can do that!" and "no wait, I can't do that", in equal measure. So the question might be, what would they accept? What could push them towards a better understanding of mathematics without sounding like utter lunacy and getting me thrown off the boat and drowned? I don't have an answer to that. :/
You will be drowned before you get anywhere near set theory. EDIT: Okay, people already made the drowning jokes below. I thought I was clever. Fine, I'll teach Pythagoras Inter-universal Teichmüller theory. Maybe he can make sense of it.
That is obviously the first thing they missed. Also has a lot of practical uses that could have advanced science by a few millennia. After that, I think set theory and logic, working my way up to Gödel's theorem would surely shatter their world, considering the difficulty they had facing the existence of irrationals. Oh, and showing the impossibility of squaring the circle, showing that pi is transcendental. But here, I may have to revise the proof.
[ "Is there a version of Logic that allows statements with values beyond true and false?" ]
[ "math" ]
[ "yu21oe" ]
[ 77 ]
[ "" ]
[ true ]
[ false ]
[ 0.93 ]
As the title says. Elaboration: there are statements that fundamentally admit neither a true nor a false label, such as the famous Russell's paradox, or the self referential "This statement is false". The typical approach to such issues in mathematics is, once encountered, that the axioms are adapted (respectively ZFC, and assigning truth value to statements by definition), or, in the case of Gödel's theorem, that we pass through the 5 stages of grief until we reach acceptance. This is natural: Mathematics is the language of the sensical abstraction, so we had the most success admitting only true and false to this language. But has it been attempted to allow values beyond true and false? To build a version of Logic that admits the nonsensical and marks it as such? I'm probably 100 years behind on the discourse here, do any of you know if this has been attempted or trivially exists?
https://en.m.wikipedia.org/wiki/Three-valued_logic ?
Edit: a commenter below pointed out some mistakes here. I don't know much about it, but I think fuzzy logic is a thing. If I understand correctly, it is kind of a probabilistic approach to logic. I think it is commonly used for logic interfacing with the real world under imperfect information, but it's not something I know much about.
Careful here not to confuse probability with truth-value. Fuzzy logics say the truth-value of well-formed formulas can take on any value in [0,1], so they have an infinite number of possible truth-values. To see that this is different from probability, note how you might construct a probabilistic logic on top of a fuzzy logic: asking for the probability that proposition p has the truth value 0.46 is different from saying that proposition p has the truth value 0.46.
There are MANY, including fuzzy logics, supervaluationist logics, paraconsistent logics, Gödel (G3) logic (a type of intuitionistic logic), Belnap-Dunn logic, and so on. Kleene and Lukasiewicz tables are two different versions of truth-tables for three-valued logics; the Belnap-Dunn lattice is used in many four-valued logics. These are often referred to as "non-classical" logics. There are also many non-classical set theories, and from these you can produce non-classical arithmetics. You can also construct different forms of probabilistic logics with these (note, as a fun exercise, how different logics affect the classical Kolmogorov axioms; one obvious example is that the Complement Rule won't hold in paraconsistent probabilistic logics, since truth and falsity are not exclusive in those logics). To respond more specifically to the problems you raise in your post, consider paraconsistent logics, set theories, and arithmetics. Paraconsistent set theories allow you to have a set of all sets. Since they admit paradoxes, these set theories can simply "accept" Russell's paradox as true, i.e. that you can have a set that simultaneously is and is not a member of itself. Paraconsistent arithmetics also get very interesting when it comes to Gödel's incompleteness theorems. If you get into the weeds of Gödel's theorems, you'll remember that he notes his theorems only apply to axiomatic systems of a certain nature. As it happens, non-trivial paraconsistent arithmetics are both sound and (para)complete. Indeed, it is no problem in a paraconsistent system to simply accept paradoxes akin to the liar paradox (which is also why Tarski's undefinability theorem is not a problem for paraconsistent logics per se; to avoid trivialism the details get a bit tricky, but ultimately you can more or less define a paraconsistent logic's semantics within the logic itself).
Not a silly question at all! In fact, I think that’s a rather profound question. I’m unfortunately not immensely well-versed in computational theory, but as I understand it, quantum computing seems to be an example of what you’re asking. E.g.: https://academic.oup.com/logcom/article-abstract/20/2/573/935641?redirectedFrom=fulltext
[ "How to teach a measure theory class for advanced undergrads?" ]
[ "math" ]
[ "yti67n" ]
[ 22 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
Hi all, I'm just asking for advice/strategies that may help me teach an introductory measure theory course for advanced undergrads next semester. I'm leaning towards using Axler's book, since it's very friendly, with lots of explanations that lead up to big ideas (and lots of interesting trivia along the way). It is my first time teaching such an advanced course, so I really want to make it as enjoyable as I can. I benefited most from dedicated profs who cared about their students during my education, so I just want to give that back to my students. Basically, any kind of advice/strategy is welcome, whether math-related or pedagogical. For example, what are some cool/interesting ways to give exams? I don't want closed-book exams, since they were stressful even for me, and I feel students shouldn't be forced to remember something they can google in seconds, given how math is done nowadays. How do you teach an advanced course like this when the students will just assume it's another definition-proof class? Suggestions for measure theory notes/cool problems are greatly appreciated as well. I look forward to hearing all the opinions. Thank you so much!
If you decide to use my book as your textbook, please contact me at [ axler@sfsu.edu ](mailto: axler@sfsu.edu ) so that I can send you the Instructor's Solutions Manual (available only to faculty who are using the book as the official textbook for a course).
Measure theory was my weakest class, so from the viewpoint that it was super difficult, I think there are a few ways I would go about it if I ever teach it.
I actually quite liked Stein and Shakarchi's book on integration theory, which focuses first on the Lebesgue measure before anything else. I think it's much easier to motivate other measures once you establish the Lebesgue integral as a generalization of the Riemann integral. General comments on L^p spaces and basics of functional analysis might also be fun things to think about including.
Check Tao's book; it is a pretty simple exposition. The catch, however, is that it requires a good knowledge of topology. There is also one other nice book, by an Indian author, that I read: "Measure and Integration" by M. Thamban Nair. The reason I like this book is that the main plot is more or less fleshed out. The only bumps one may encounter are conceptual difficulties at certain points, but I don't think that will happen, because the book is that clear.
Axler's book on measure theory may be good for a gentle introduction. If I'm not mistaken, he introduces it similarly to how Riemann integrals are defined, via a sort of upper and lower sum.
[ "Can you imagine walking on the real line?" ]
[ "math" ]
[ "doakp3" ]
[ 0 ]
[ "Removed - incorrect information" ]
[ true ]
[ false ]
[ 0.25 ]
null
just imagine walking on an n-dimensional hyperplane and set n=1
I can’t fucking imagine it
Project yourself into 2d
How would I walk on some 1D object with 3D feet?
No, my feet are too big.
[ "Can anyone prove or disprove this?" ]
[ "math" ]
[ "do8b1d" ]
[ 1 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 1 ]
null
Yes I can. But looks like homework.
https://en.wikipedia.org/wiki/Conway_base_13_function
f\left(x\right) 😠
For fucks sake, why can't you just use letters
It's an optional challenge problem. I don't want an answer; I'm asking for some resources.
[ "What would be the formula for calculating the percentage my gpa went up?" ]
[ "math" ]
[ "do8zw3" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.33 ]
null
I wouldn't even mention GPA in an interview unless they ask for transcripts.
It comes up a lot.
New/original - 1
Times 100
Would this change (the -1) if I wanted to use my first semester gpa and my last semester gpa (two years later)
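The formula from the two comments above, written out (a sketch with hypothetical example numbers of mine). The "-1" does not change when comparing first-semester and last-semester GPA; only the two inputs do:

```python
def percent_change(original, new):
    """Percentage change from `original` to `new`: (new / original - 1) * 100."""
    return (new / original - 1) * 100

# Hypothetical numbers: a GPA going from 3.0 to 3.3 is a 10% increase.
assert round(percent_change(3.0, 3.3), 6) == 10.0
# The same formula applies to first-semester vs last-semester GPA:
assert percent_change(2.0, 3.0) == 50.0
```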
[ "Mathematician who solved prime-number riddle claims new breakthrough" ]
[ "math" ]
[ "ytc4pz" ]
[ 931 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
null
This will be incredible if it passes review. What an odd character, having a mathematical breakthrough at 60 and again a decade later.
If true, big. Zhang has had a rough go. If he could get a second victory that would be truly epic for someone who ended up having to work at Subway to pay the bills.
I think it's kind of insulting to call his work on twin primes a "riddle".
It's weird that this article thinks there might be people who read past the headline who do not already know the definition of "prime number".
I think actual progress is slow and unrecognized. I'm a linguist, and a lot of progress in the last two decades has come not through Chomsky but from reconciling computation and pure linguistics. A normal person won't know this; pure linguistics breakthroughs are even more unknown, and a lot of linguists won't know either unless they're in the subfield. But all of this is to say, to your point "I suspected most people simply don't achieve much at all": I very much disagree. I think progress is typically understated and very "small", or at least misunderstood. Trailblazers may make many major breakthroughs that receive media recognition, but that doesn't diminish people's contributions to a field overall. The canon of knowledge benefits from communal work, not just individual brilliance.
[ "How to '-2' from 4 using square root?" ]
[ "math" ]
[ "do5gq2" ]
[ 1 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.54 ]
null
Sqrt(4) is 2 because we define it that way in order to make the square root a function. However, x^2 = 4 has two solutions: sqrt(4) = 2 and -sqrt(4) = -2.
We just define the square root to be the positive one. But you're right, the solutions to x^2 = 4 are 2 and -2, i.e., plus or minus sqrt(4).
A function may only take on one value for each input, so in order to speak of the square root function we must pick only one value for each input. That's the rationale.
I find this explanation convincing. Thanks!
The square root only ever returns a single value. Obviously the motivation for the square root's existence is to invert the squaring function. However, the inverse of x^2 is multi-valued, i.e. there are two possible solutions to x^2 = 4. Functions are defined to only ever have one output, so in order for the inversion of x^2 to be a function we need to make a choice as to which branch of the inversion we take to be the square root. The canonical choice is that sqrt() is the positive branch of the inverted function, and -sqrt() is the negative branch. This may also address the question of /u/theeliteguy from above. For some concrete examples: sqrt(x) = 2 has x = 4 as a solution. sqrt(4) = x only has x = 2 as a solution. x^2 = 4 has both x = -2 and x = 2 as solutions.
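The principal-branch convention in this thread is easy to illustrate (a sketch; the example values are mine):

```python
import math

# math.sqrt returns only the principal (non-negative) branch:
assert math.sqrt(4) == 2.0

# ...but the *equation* x**2 == 4 has two solutions, +sqrt(4) and -sqrt(4):
solutions = sorted(x for x in (-math.sqrt(4), math.sqrt(4)) if x**2 == 4)
assert solutions == [-2.0, 2.0]
```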
[ "Geometric meaning of a rotational matrix's determinant" ]
[ "math" ]
[ "do6idv" ]
[ 5 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.69 ]
null
Think of the determinant as a scalar which scales areas/volumes after a transformation. A rotated object maintains the same area, so the scalar must be 1.
Orientation is like the "handedness" of your coordinate system. Your normal Cartesian basis is "right-handed". However, if you reverse the direction of one of the basis vectors, or if you swap the basis vectors, your basis will be left-handed. You can see that the basis with one coordinate reversed can be rotated into the basis with the basis vectors swapped, but not into the original basis. These transformations have determinant -1, and they turn your plane into a mirror image of itself. The same holds for transformations over R^3, but it's a bit different: reversing one of the coordinate axes or swapping one with another still turns your R^3 into a mirror version of itself, but if you do that twice, your R^3 turns back into some rotated version of itself. Applying two transformations with determinant -1 gives you a transformation with determinant 1. You can see that all right-handed bases can be rotated into one another, and all the left-handed ones as well, but the two classes can't be rotated into each other. This all has some important meaning in math. The rotations and rotoreflections together are called orthogonal transformations. They are special because they preserve dot products between any two vectors. However, because reflections change the handedness of our space, they don't preserve the cross product between vectors; they multiply it by -1. This has a lot of importance for physics, for reasons I don't want to get into.
The determinant of a matrix is how much volumes are scaled under the linear map. |Determinant| = 1 means the volume element is preserved.
To add on to what the others said: orientation can also be thought of as left- or right-handedness. So a transformation with determinant -1 preserves volume but changes orientation. You can't change a volume's handedness by just rotating, so rotations have determinant 1.
Orientation isn't synonymous with rotation. Orientation-preserving just means the images of the standard basis vectors are "oriented" the same way as the standard basis vectors (I can explain more on this if you're interested). So for example, in R^3 it would mean that if you take the images of (1,0,0), (0,1,0) and (0,0,1), they would still satisfy the right-hand rule. For an example, consider the map in R^2 sending (1,0) to (1,1) and (0,1) to (0,1). This map is orientation-preserving, as (1,1) and (0,1) are oriented the same way as (1,0) and (0,1) ((1,1) is "clockwise" from (0,1)). It is also volume-preserving, since the parallelogram spanned by (1,1) and (0,1) has area 1 (i.e. the determinant of the matrix is 1). However, it's not a rotation, because aside from being volume- and orientation-preserving, rotations should also preserve lengths and angles! In this case, it clearly shrinks the angle between (1,0) and (0,1) from pi/2 to pi/4.
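The determinant signs discussed in this thread can be checked on concrete 2x2 matrices (a sketch; the rotation angle is an arbitrary choice of mine):

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

t = math.pi / 3
rotation = [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]
reflection = [[1, 0],
              [0, -1]]  # flips the y-axis: a mirror image

# Rotations preserve both area and orientation (det = +1);
# reflections preserve area but reverse orientation (det = -1).
assert abs(det2(rotation) - 1) < 1e-12
assert det2(reflection) == -1
```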
[ "Intruiging challenge problem" ]
[ "math" ]
[ "do4nxv" ]
[ 42 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.95 ]
null
I think I've come far enough on this to conclude it's not going to be beautiful... $$ r_n = r\prod_{k=3}^{n+1}\sec(\pi/k) = r2^{n-3?}\sum_{s_k=\pm 1}\sec(\sum_k s_k\pi/k) $$ Noooope Edit: math formatting? Edit: trig derp
edited original post, that might give you a good start
So, I don't have my notes here, but that's about how far I got (except I have the product of \cos's instead of \sec, but that could be a matter of definitions). What I'm saying is I don't know any good maths to get from the combinatorially growing sum of cosines of sums of angles to anything nicely analytical.
No shirt, of course it's sec. I just drew too badly. But it comes out the same except now the sums are in the denominator which makes me feel like doing it less.
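The product of secants in the parent comment can be evaluated numerically (a sketch of mine covering only the product term, not the expanded sum of cosines):

```python
import math

def circumscribe_ratio(max_sides):
    """Partial product prod_{k=3}^{max_sides} sec(pi/k): the factor by
    which the radius grows when circumscribing a triangle, square, ...
    around a circle, each polygon then inscribed in a new circle."""
    prod = 1.0
    for k in range(3, max_sides + 1):
        prod /= math.cos(math.pi / k)
    return prod

# First two factors: sec(pi/3) * sec(pi/4) = 2 * sqrt(2).
assert abs(circumscribe_ratio(4) - 2 * math.sqrt(2)) < 1e-12
# The partial products stay bounded, i.e. the product converges:
assert circumscribe_ratio(10_000) < 9
```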
Unfortunately, your submission has been removed for the following reason(s): this kind of question is better suited for /r/learnmath, /r/homeworkhelp, or /r/cheatatmathhomework than /r/math. If you have any questions, please feel free to message the mods. Thank you!
[ "[General] A Swedish graduate school requires \"mathematical analysis\" as a pre-req for engineering program. Is there a culture difference? I thought only math majors took that." ]
[ "math" ]
[ "do31y8" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.66 ]
null
It is probably a mistranslation. “Analys” in Swedish is calculus. You should be fine with that background
The courses in the beginning of an engineering or mathematics program in Sweden dealing with differentiation, integration, Taylor series etc. are usually called something with "Analys". Check Chalmers student portal ( https://student.portal.chalmers.se/en/chalmersstudies/courseinformation/Pages/SearchCourse.aspx ). And search for "Matematisk Analys" in the Swedish course name field and note that the english course names are something with calculus.
Outstanding. I found a chalmers math analysis course and went through the syllabus, definitely done that stuff before. Thank you very much!
No need to worry, you're well prepared. Mathematical Analysis is essentially equivalent to Calculus, with a bit more focus on the theoretical aspects. It's taught in undergrad programs and is common in post-Soviet countries. A bit surprised to see it in the Swedish curriculum.
Damn! There is a Complex Adaptive Systems course at Chalmers?!? What the heck am I doing in Germany?! That said, while it is true that people in Europe generally start with a better level in math/physics after high school than people from the US, by the time they have a BSc that advantage is (usually) gone. If you have a degree in engineering or math, that should be enough.
[ "Probabilities on Countable Sets" ]
[ "math" ]
[ "do2giy" ]
[ 0 ]
[ "Removed - incorrect information" ]
[ true ]
[ false ]
[ 0.25 ]
null
Your idea is a decent one. It won't work as you've stated it, since cardinal arithmetic simply doesn't work like that at all, but there is a context in which it does kind of work, which has been formalized and investigated: the notion of Loeb measures in nonstandard analysis. It's quite finicky, however, and for your specific use it just doesn't reveal anything new or interesting. Consider a countably saturated extension of the universe and the hyperfinite set N' such that [N'] = {{0, ... , n} : n\in N}. We have that N\subseteq N'\subset N*, and that's kind of the best we can do internally. You can now define the measure m : P_I(N') -> [0,1], where P_I(N') is the internal algebra of subsets of N', by m(A) = sh(|A|/|N'|), where |B| denotes the internal cardinality of the hyperfinite set B. To get the measure of some subset C of N you can then consider the measure of the hyperfinite set C' such that [C'] = {C\cap {0, ... , n} : n\in N} (we need to build C' in the same way we did N', as C* just isn't hyperfinite). Then m(C') is the ultralimit of |C\cap {0, ..., n}|/n, which is well-defined. You've probably noticed some problems with the thing I've defined above. Where things go haywire is that P_I(N') is not a sigma-algebra. Suppose that (A_n : n\in N) is a sequence of internal subsets of N'; if \bigcup_n A_n is an internal subset of N', then by countable saturation it is actually a finite union of some initial segment of elements of (A_n : n\in N). So if the union of the disjoint collection (A_n : n\in N) lies in P_I(N'), then m(\bigcup_n A_n) = \sum_n m(A_n), as it reduces to finite additivity. So it is technically countably additive. Overall it's not very surprising this is what you get, although it's a very cute way to formalize it. Some ideas turn out to be very robust, and no matter how you attack them you tend to get back to the same thing. This is one of them. Edit: I did totally ignore most of your post.
That's because, even at my most charitable, your precise idea just doesn't really work at all. This is in large part due to your expecting operations on infinity to simultaneously work like the reals and like cardinals. Edit 2: You also assumed the continuum hypothesis, which is not really a "normal assumption".
"Edit 2: You also assumed the continuum hypothesis, which is not really a 'normal assumption'." The OP mentions 2^ℵ_0 = ℵ_1, but I think they only mean to use the fact that 2^ℵ_0 ≠ ℵ_0, which is true in ZF.
The continuum function is not surjective, and there's no reason for it to be injective. There is no way to define log(\aleph_\delta) which is consistent with cardinal arithmetic. For instance, suppose that 2^\aleph_0 = 2^\aleph_1 = \aleph_2. Then what is log(\aleph_2), and what is its relation to the continuum function? "I assume that 2^aleph_0 != aleph_0, which is certainly a standard assumption in set theory. This is distinct from the continuum hypothesis, which states that 2^aleph_0 = aleph_1." That's different from what you stated in the post. But sure, I'll accept it. It's not an assumption, though, as it's provable. "Having published on this subject in a refereed journal while still an undergraduate, I can assure you, none of this is wrong." I don't believe you at all lol. Your mathematical maturity is pretty low if you're trying to reinvent the wheel. Give me a DOI.
ResearchGate is not peer reviewed; you can put whatever rubbish you want on there. What I've said is not gibberish. In fact it's the bottom line for what one should know if they want to talk about set theory. Even assuming GCH (which is not a "normal assumption"), alephs are not indexed by naturals but by ordinals, so the definition does not exist for every limit cardinal. More importantly, 2^X is meaningful as it's the cardinality of the powerset of X. The cardinality of a powerset can either be finite or uncountable; this is easily provable. So the existence of a "log(\aleph_0)" for which the desired properties hold leads to a contradiction. If you change the definition of the continuum function to circumvent this, then what you've defined is simply some symbol that no one cares about, in relation to some function that no one cares about, that you can't do anything with. You can keep telling me what I'm talking about is gibberish and bemoan the fact that you're a misunderstood genius. Or perhaps you could instead realize the reason nobody thinks your ideas are very good isn't because those people are stupid, but rather because your education is incomplete. Nobody is going to stop you from pursuing these ideas forever if you want to. If that's what you enjoy, then by all means continue. But just know that if you continue on this road, no one will ever care about a single thing you do in mathematics.
Why are you being so combative about this? There are multiple people in this sub who do set theory for a living who, I am certain (and I'm sure u/Obyeag is as well), would tell you the same things you've already been told. Is it that hard to just take some constructive feedback and move on? Logarithms aren't well defined for all cardinals, and not all of your ideas are good ones. Just like everyone else's.
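The finite truncation described in this thread, |C ∩ {0,...,n}|/n, is easy to compute for concrete sets (a sketch; the example sets are my own):

```python
import math

def natural_density(indicator, n):
    """|C ∩ {1, ..., n}| / n: the finite truncation whose limit
    (when it exists) is the density discussed in the thread."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

n = 10_000
evens = natural_density(lambda k: k % 2 == 0, n)
squares = natural_density(lambda k: math.isqrt(k) ** 2 == k, n)

assert evens == 0.5     # the even numbers have density 1/2
assert squares == 0.01  # 100 perfect squares up to 10_000; density tends to 0
```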
[ "Classes Prep" ]
[ "math" ]
[ "do2qa4" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.4 ]
null
Watch the calculus videos by Khan Academy and the series Essence of Calculus by 3blue1brown on YouTube. That will help you recapitulate some things and doesn't take as much time as reading a calculus book by yourself.
I wrote this comment under the assumption that he has already read and gone through the tough parts of calculus beforehand. I totally agree with you, but my idea is to recapitulate the subject, not learn it. If he has forgotten everything and needs to learn it again, it really is best to do it with a book, even though it is harder work.
Ah alright, in that case I'll agree. Books are usually way more tedious to scan through than videos.
This is very dangerous advice. It is actually something I have done myself several times, and often to no benefit. With videos you usually watch the whole thing hoping it might at some point click, and it is hard to work backwards (e.g. if they refer to some theorem). In a book you usually have most of the material, including what it builds on, directly at hand. So the intuitive parts can be read over quickly, while difficult parts tend to refer to concrete material elsewhere in the same book, which you can then read first. Especially 3blue1brown is a big offender in this for me. I really like the way he visualizes everything, but if I had to learn math from his videos alone it'd take me a hundred times longer than reading from a book.
r/learnmath
[ "What does it mean to compare functions \"asymptotically\"?" ]
[ "math" ]
[ "dny9j6" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.5 ]
null
Here, you can think of it as analyzing the functions as they "approach infinity". Think of the functions f(n) := n and g(n) := n + 1. At no point are these two functions equal, but we can ask how fast these functions grow relative to one another. And, of course, these functions grow identically, so "asymptotically" they have the same behavior.
Ok, but if we're working with discrete functions (i.e. non-differentiable), how do we have a notion of "growing"?
As long as you have some ordered set you can define monotone functions, which have a notion of growing. Consider (ℕ, ≤), a discrete space with an order. You don't need differentiability to define an increasing function, nor to define the behavior of functions. There are a lot of non-differentiable functions that are of interest (unless you're a physicist).
! remindme 2 days
KZReminderBot: Hi! Your reminder is set. Thread has 1 total reminder and 1 out of 4 maximum confirmation comments. Additional confirmations are sent by PM.
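A quick numerical illustration of the f(n) = n versus g(n) = n + 1 example from the top comment: the two functions are never equal, but their ratio tends to 1, which is the sense in which they have the same asymptotic behavior. A minimal sketch:

```python
# f and g are never equal, but f(n)/g(n) -> 1 as n grows,
# so they grow "asymptotically" the same.
f = lambda n: n
g = lambda n: n + 1
ratios = [f(10**k) / g(10**k) for k in range(1, 10)]
assert abs(ratios[-1] - 1) < 1e-8
```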
[ "Are you able to make a 50-50 probability game out of a random number of 8 digits?" ]
[ "math" ]
[ "do0go2" ]
[ 2 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 1 ]
null
The obvious one (if the digits are uniformly randomly distributed) is guessing the sum-parity of (any nonempty subset of) the digits.
Oh man! That is not obvious to me. Care to explain with a calculation?
I'll hazard a guess. 00000000 to 99999999 is 100,000,000 numbers. Simply express the serial as a number, subtract 50,000,000, and try to guess whether the difference is positive or negative?
No -- sum-parity is simply whether the sum is even or odd. See other comment.
Well, that's a lot easier.
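A small sketch (assuming uniformly random digits, as the parity answer does) confirming that guessing the parity of the digit sum splits the 10^8 serial numbers exactly in half:

```python
def parity_counts(n_digits):
    """Count strings of n uniform decimal digits with even vs odd digit sum."""
    even, odd = 1, 0                      # length 0: the empty sum is even
    for _ in range(n_digits):
        # 5 even digits (0,2,4,6,8) keep the parity, 5 odd digits flip it
        even, odd = 5 * even + 5 * odd, 5 * odd + 5 * even
    return even, odd

even, odd = parity_counts(8)
assert even == odd == 50_000_000          # an exact 50-50 game
```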
[ "Infinite Prefix-Free Languages" ]
[ "math" ]
[ "dnw04x" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.17 ]
null
What about 1 0?
Clever, thank you. This will do.
Unfortunately, your submission has been removed for the following reason(s): please post to /r/learnmath, and see our lists of recommended books and free online resources. If you have any questions, please feel free to message the mods. Thank you!
Have you tried coming up with one yourself?
Too lazy.
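The accepted answer is terse; reading "1 0" as the family of words 1^n 0 (a standard example of an infinite prefix-free language, and an assumption on my part) makes it easy to verify for small n:

```python
from itertools import permutations

def is_prefix_free(words):
    """True if no word in the list is a (proper) prefix of another."""
    return not any(v.startswith(u) for u, v in permutations(words, 2))

# 0, 10, 110, 1110, ...: every word ends in 0, but any proper prefix
# of a member is all 1s, so no member is a prefix of another.
lang = ["1" * n + "0" for n in range(50)]
assert is_prefix_free(lang)
assert not is_prefix_free(["1", "10"])   # "1" is a prefix of "10"
```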
[ "Making a presentation about Emmy Noether and I need some of your help" ]
[ "math" ]
[ "dnu84j" ]
[ 0 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.45 ]
null
Most of her work was in pure mathematics. She only briefly worked on physics problems, but the impact of Noether's theorem about conserved quantities was very large. If you've had abstract algebra, the notion of "abstract algebra" as a field is largely due to her. Maybe van der Waerden's old textbook (which is the original abstract algebra textbook) will have more. Famous theorems she proved are the Noether isomorphism theorems, Lasker-Noether, Noether normalization in algebraic geometry, and Brauer-Hasse-Noether.
Thank you for suggesting her theorems! I'll look it up.
When is this due? What are the requirements? Sorry, but the teacher in me has got to suggest: Wikipedia would NOT be a source I'd accept, and videos are NOT a source I would accept. These would be good starts, but find biographies of hers and articles about her. If primary sources are not readily available, then use reliable secondary ones, so that when the instructor sees your sources they will have confidence in your research. The first comments are a really good start; I'm just suggesting you find better, more reliable sources.
Thank you! I looked up her life briefly on Wikipedia because I had no idea who she is. I was gonna look into her biographies and articles too :)
Unfortunately, your submission has been removed for the following reason(s): please try /r/learnmath, /r/homeworkhelp, /r/cheatatmathhomework, or the Simple Questions thread on /r/math. If you have any questions, please feel free to message the mods. Thank you!
[ "Importance of math in our lives" ]
[ "math" ]
[ "dnsf8s" ]
[ 0 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.33 ]
null
Its like art, except you are guaranteed a wellpaid job
with a massive caveat of pursuing anything but pure mathematics, in which case the "art" argument gets weakened somewhat. (it's more like regular ol' art then!) i've made peace with it, i like art! someone's gotta do it.
I don't know how well that holds in this economy. Sooooo many math jobs require an advanced degree.
Unfortunately, your submission has been removed for the following reason(s): please post to /r/learnmath, and see our lists of recommended books and free online resources. If you have any questions, please feel free to message the mods. Thank you!
Maths is awfully broad. Pure maths is probably more for the art and beauty of it, and for increasing the breadth and depth of human knowledge and abstract thinking. Applied maths, on the other hand, is insanely useful in countless aspects of modern society, be it physics, chemistry, engineering, economics, medicine, biology, etc. It all depends on what you wish to specialize in, but all of these require great mathematical models and ideas. I've heard an estimate of how much of a country's GDP is due to math, and for most developed countries it's consistently in the range 15%-20%. Needless to say, being a mathematician pays well and is well received in the market.
[ "Engineering physics degree problems.Need advice" ]
[ "math" ]
[ "dnsj7k" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.64 ]
null
The more you practice the better you'll get at math.
Hi, so I don't know how helpful my comment is going to be because I've been in the opposite situation: I can't calculate anything, but I'm pretty okay at math on the ideas side. What you need to do is make up your mind. It sounds like at one point you did make up your mind but didn't think it through enough, and the problem is you'll get nowhere if you're always second-guessing yourself. So take some time away from Reddit, think hard about the pros and cons, and go with your gut. If you decide to keep pursuing it, great, I have some help for you. Don't focus on the number of hours you study and how hard you're grinding. When you get to this stage of math or physics it's pure understanding, and staring at a paper for hours gets you nowhere. What you need to do is start writing something: test what you think you know, and if it's not right, look back at where you might have gone wrong. Just make sure you keep writing, it really helps. Another great thing you can do is get a good tutor, not one who's there to help you study, but someone who deeply understands the topic and can tell you where your understanding is wrong. Hope I could give you some help, sorry for the long comment.
When I started my maths degree a lot of people in tutorials, seminars etc. would get the answers well in advance of when I would, and there were some remarks made about that due to the competitive nature of mathematics and my institution. In assessments, however, I continue to outperform the vast majority of my cohort. Everyone starts slow; the trick is to outwork your peers without trying to show off or complete work "as fast as possible". As long as you don't have an issue with meeting deadlines and getting through exams in time, you're fine. It's about putting the hours in. I used to hear comments all the time from people responding to others' results saying "you're just naturally good at maths" or "how did you get that without doing that much work". It's all lies and exaggerated comments from people so that others perceive them as "clever", "gifted" etc. Screw that. Yes, some people can pick ideas up quicker, and a very minute number of people at the top of academia are gifted. But at the end of the day it all comes down to the hours of effective work you decide to put in. Accept where you are now, figure out where you want to be, and put in the hours to get there. No one worth impressing cares about who gives the fastest answer, and you should only care about whether yours is the right one.
I like to bring up Kevin Durant's famous quote whenever I hear people who struggle with this concept: 'Hard work beats talent when talent fails to work hard.'
I did engineering at university. I did a foundation year before starting the proper course. That year was spent getting everybody up to speed with physics, maths, basic engineering principles etc. Maybe your grades and abilities are good enough already and you don’t realise it. I don’t know, speak with the uni, see what they say. At first I felt like I’d been put back a year and maybe I had technically but the foundation course really, really helped
[ "a^2 + b^2 = 2019^2 Find A + B. From AMATYC 2019." ]
[ "math" ]
[ "dnplau" ]
[ 14 ]
[ "Removed - try /r/learnmath" ]
[ true ]
[ false ]
[ 0.82 ]
null
Assuming you cannot use a calculator this cannot be solved by brute force alone, since assuming a > b > 0 we still have a wide range of possibilities, i.e. 0 < b < a < 2019. This is how I would do it. (a, b, 2019) is a Pythagorean triple. Now, everybody knows or should know that every primitive Pythagorean triple can be written as (2st, s² − t², s² + t²) where s and t are relatively prime and of opposite parity (i.e., s + t is odd). Here primitive means that the sides of the triangle have no common factors. So, if our triple (a, b, 2019) is primitive, then there should be two numbers s > t > 0 such that t² + s² = 2019 (note we reduced to 2019 from 2019²!). On the other hand, if the triple is not primitive, then there must be a divisor d of 2019 such that t² + s² = d. It is immediate to see that 2019 is divisible by 3 since 2+0+1+9 is divisible by 3. Indeed 2019 = 3 · 673. Now, s² + t² = 3 has no solutions, but we can try with 673. Since floor(sqrt(673)) = 25 we have far fewer possibilities. If you have played around with numbers for a few years you probably know by heart the squares of the first 25 numbers. It will take no time to find that indeed 12² + 23² = 673. So, our primitive triple is (2 · 12 · 23, 23² − 12², 23² + 12²) = (552, 385, 673) and to get our solution we just have to multiply by 3: (1656, 1155, 2019), so we conclude that at least we have the solution 1656 + 1155 = 2811. Then, if needed, we should check whether there is a primitive triangle with hypotenuse 2019, i.e., whether s² + t² = 2019, but this is impossible, because 2019 = 3 · 673 and there is a well-known theorem that says a number cannot be the sum of two squares if its prime factorization contains a prime equal to 3 mod 4 raised to an odd power (in this case 3 itself).
Bruh. You're a genius. This is amazing. I'm still in precalc so I don't fully understand it, but wow. :) The math club meets every Friday. :D This is amazing.
You don’t need more advanced techniques to understand it; just some new theorems. Precalc is more than enough.
I don't know what precalc is, but if you have some choices for A + B it is easier. Bear in mind that this kind of problem becomes easier with practice. There are a lot of "tricks", or better, "common approaches" that recur. I'm not an expert because I hate competitions, but modular arithmetic often helps. 2019² is divisible by 9 because 2019 is divisible by 3. Now, two squares can sum to a multiple of 9 only if both are multiples of 9. This can be seen by taking {0,1,2,3,4,5,6,7,8}² = {0, 1, 4, 0, 7, 7, 0, 4, 1} mod 9. The only two numbers among 0, 1, 4, 7 that sum to a multiple of 9 are 0 + 0, so the two squares must be multiples of 9, so A and B must be multiples of 3, and so A + B must also be a multiple of 3. This excludes all the possibilities apart from (a) A + B = 2319 and (e) A + B = 2811. Now, I don't know your math level, but here the most immediate thing is to try to solve the two systems A² + B² = 2019², A + B = 2319 and A² + B² = 2019², A + B = 2811 and see which system gives you a solution. This is not very hard to do. Take the second system. Even without taking any shortcut: you get A = 2811 − B. Putting that in the first equation you get (2811 − B)² + B² = 2019². This is a quadratic equation that can be solved routinely in no time if you have at least a pocket calculator.
This is one of 20 questions. You have one hour to answer them all, so about 3 minutes per question. xD
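The contest allows no calculator, but the number-theoretic answer above can be double-checked offline with a brute-force search over the legs (an `isqrt`-based perfect-square test; a sketch, not the intended contest method):

```python
import math

def legs(c):
    """All positive leg pairs (a, b) with a <= b and a*a + b*b == c*c."""
    out = []
    for a in range(1, c):
        b2 = c * c - a * a
        b = math.isqrt(b2)
        if b > 0 and b * b == b2 and a <= b:
            out.append((a, b))
    return out

assert legs(2019) == [(1155, 1656)]   # the unique solution up to order
assert 1155 + 1656 == 2811            # so A + B = 2811
```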
[ "Interpretation of functions on a y-axis: log, x-axis: linear plot" ]
[ "math" ]
[ "dnjw1w" ]
[ 1 ]
[ "Removed - see sidebar Removed - incorrect information" ]
[ true ]
[ false ]
[ 0.57 ]
null
exp(1/x)
Doesn't matter. The question was "what kind of function", obviously you'd need to choose your coefficients right to get that exact one.
You can always shift things to your liking. But, to answer your question, yes it is.
This plot contains (0,1). Isn't exp(1/x) asymptotic to the y-axis?
10
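For what it's worth, the connection the thread relies on can be sanity-checked numerically: on a semilog-y plot a curve y = f(x) is rendered as log f(x) against x, so f(x) = exp(1/x) is drawn as the hyperbola 1/x. A minimal sketch:

```python
import math

# On a semilog-y plot, y = f(x) is drawn as log(f(x)) vs x,
# so f(x) = exp(1/x) appears as the hyperbola 1/x.
xs = [0.25, 0.5, 1.0, 2.0, 4.0]
drawn = [math.log(math.exp(1 / x)) for x in xs]
assert all(abs(d - 1 / x) < 1e-9 for d, x in zip(drawn, xs))
```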
[ "In light of a recent thread, when asked what the square root of 4 is, would you ask \"which one?\" or would you not consider -2 to be a square root of 4? Wikipedia says that 4 has two square roots." ]
[ "math" ]
[ "do62ue" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
[deleted]
We are human and can read context to respond appropriately. This goes for most middle-school level terminology questions posted here. It's a shame and travesty that many maths teachers use this sort of thing as a gotcha and dock marks, but in real life nobody cares as long as you're clear about what you mean. But for what it's worth, the convention is that -2 and 2 are square roots of 4, while sqrt(4) is 2.
-2 and 2 are square roots of 4. Its not hard
-2 squared equals what? Since square root is the number -2 is also square root. Its not hard
They both satisfy x^2 = 4, but whether or not they're both square roots might be contentious
Square root is a function. One number can't have 2 square roots. It's not hard
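The two conventions argued over in this thread can be checked mechanically: both −2 and 2 satisfy x² = 4, while the sqrt function returns only the principal (non-negative) root. A small sketch:

```python
import math

# Both -2 and 2 satisfy x^2 = 4 ...
roots = [r for r in range(-10, 11) if r * r == 4]
assert roots == [-2, 2]

# ... while sqrt(4) denotes the principal (non-negative) square root.
assert math.sqrt(4) == 2.0
```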
[ "I believe I have made a conjecture!" ]
[ "math" ]
[ "do0nxs" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
[deleted]
You can prove this using ring theory, and could probably show the ideal generated by those numbers is equal to ℤ. Simply enough, you can show that (2,3) is ℤ by showing that 1 is in it and 1 is a generator of ℤ.
I'm assuming you're referring to adding these numbers. In fact, any number greater than 1 can be made by adding just 2s and 3s: if it's even, it's by definition a multiple of two. Otherwise, that number minus 3 is even, so you can add as many 2s as that and then a 3 :)
what does "can be made from" even mean?
You could just shorten that list to 2 and 3. It's been forever since I've done proofs, but I'm certain you could prove it
From the sidebar: If you're asking for help learning/understanding something mathematical, post in the Simple Questions thread or /r/learnmath . This includes reference requests - also see our lists of recommended books and free online resources. Here is a more recent thread with book recommendations. If you are asking for a calculation to be made, please post to /r/askmath or /r/learnmath .
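The parity argument from the thread (even n is all 2s; odd n is one 3 plus 2s) can be written out directly:

```python
def as_twos_and_threes(n):
    """Write n >= 2 as a list of 2s and 3s, following the thread's argument:
    even n uses only 2s; odd n uses one 3 plus 2s."""
    if n % 2 == 0:
        return [2] * (n // 2)
    return [3] + [2] * ((n - 3) // 2)

assert all(sum(as_twos_and_threes(n)) == n for n in range(2, 500))
assert all(set(as_twos_and_threes(n)) <= {2, 3} for n in range(2, 500))
```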
[ "Do you think transcribing lecture notes into latex is a good studying method?" ]
[ "math" ]
[ "do4e0i" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
[deleted]
It is very time consuming, especially if you have a lot of equations to copy, and it's something you can easily do with pen and paper. I personally just write everything the professor says, even if I don't fully understand it and even if my notes get very messy. Then I recopy them (on paper) more neatly and, by doing so, I check whether what I'm writing actually makes sense. This also solves the "disorder" problem (i.e. if your professor writes a theorem, then a lemma, and then the proof of said theorem, you can "mark" the lemma and, when recopying, transcribe it before the theorem if you prefer).
I agree here. Pen and paper is usually the better tool, not just because of ease of use, but also helps learning better than typing on a computer (<insert link to scientific studies here>). That said, I had friends who did all their notes in LaTeX and they were quite successful at it. And I know people who then put all their notes together and created a book (~300 pages or so) with all the knowledge you need to pass the first two years and sold that.
Yep, I also thought about rewriting everything with LaTeX and sell it (after I passed the exam), but never had much time to do so.
It's basically OCD. I would rather write it manually (thereby also practicing writing symbols), they are good enough (like 95% as good for a fraction of the effort).
Copying notes is not a good study method for math. You should make a summary sheet instead and spend 90% of your time doing practice problems
[ "Stereographic Bliss" ]
[ "math" ]
[ "do9a77" ]
[ 239 ]
[ "Removed - low effort image/video post" ]
[ true ]
[ false ]
[ 0.99 ]
null
Painting? That looks photorealistic. Either way, it's beautiful.
The stereographic projection is one of many ways of representing the surface of a sphere on a plane. I’m not sure how this sphere was produced, but using a stereographic projection is absolutely a way of doing it. It has lots of nice properties, the most important of which is that it is “conformal” — meaning that it preserves the angles at which curves meet. I’m not sure of the etymology.
Can someone explain why this is "stereo"graphic?
I titled it because the surface of the sphere is being (approximately) stereographically projected onto your retina! If the original image was printed on a plane, then it was stereographically projected onto the sphere when it was painted. Therefore, this video allows you to experience conjugation by stereographic mapping.
From Wiktionary: stereo-: forming words relating to the solid, the three-dimensional (e.g. stereophonic); specifically, forming words relating to the binocular contribution to three-dimensional vision. The word "stereographic" was given to this kind of projection in 1613, but it was called a planisphere before that (referring to the mapping from a sphere to a plane). As for why it's specifically "stereographic", we may need to use a bit of imagination or someone else's linguistic expertise.
[ "Modern high school math should be about data science — not Algebra 2" ]
[ "math" ]
[ "do930p" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.5 ]
null
"We surveyed 900 “Freakonomics” podcast listeners — a pretty nerdy group, we must admit — and discovered that less than 12% used any algebra, trigonometry or calculus in their daily lives." Who cares? What a bogus BS argument for not teaching algebra, geometry, and trigonometry. How many of these people have visited Denmark? A tiny percentage I am sure. Well then, no point in reading Hamlet.
And how do you plan on teaching people calculus if they can't use basic algebra?
I mean, maybe a focus on data would be good for some people. But I feel like a lot of the curriculum is necessary for that stuff. The authors cited the example of dividing polynomials, but that is a relatively small part of Algebra 2. What I did in my Algebra 2 class included combinatorics, probability, functions and their graphs, and even a small section on (very) basic linear programming. Maybe a lot of the trigonometry could be removed, but other than that, I do not see what else is not useful (for data).
Trigonometry is hugely important in mathematics and all applications. The modeling of periodic phenomena, which are everywhere, is based on the trig functions. Trig functions are basic to understanding complex numbers. Trig functions come up in unexpected places, like robotics.
One of her arguments mentions how prevalent the use of Excel is nowadays and concludes that we should therefore teach it in high school. Lmao, yeah, if the students need assistance learning MS Excel, then math education isn't their biggest concern.
[ "Deviation of time." ]
[ "math" ]
[ "dnrndq" ]
[ 0 ]
[ "Removed - incorrect information" ]
[ true ]
[ false ]
[ 0.21 ]
null
In mathematics, those terms are defined differently. A rational number, in mathematics, is a ratio of two integers. An irrational number is a real number that is not a ratio of two integers. So, even though we are using the same words, they are not defined the same - it is like how "orange" can refer to either a fruit or a colour. Since that is the case, even if what you are saying is true, it is very hard to make sense of since the way you define these words is specific to you, we do not know what you mean. Imagine if I secretly and silently redefined several words in this post, would you be able to understand me? Would you be confused if I did that? Maybe we can work together, here, and translate what you are saying into common terminology. You would have to be patient and work with the people here, though. Does that sound all right to you?
What's this nonsense?
I’m not sure, but can we keep it up to laugh at a little longer?
4 days old account incredibly wrong and arrogant. Couldn't possibly be a troll.
Rather than getting into peculiar philosophy, let's do a basic reality check and make sure we are using words in the same way as each other. To that end, how do you, specifically, define "rational" and "irrational"? I'm willing to listen to what you have to say, but you have to meet me in the middle and clearly explain yourself, please oblige me, I would like to hear what you have to say.
[ "Turned my research into a fashion statement" ]
[ "math" ]
[ "dnoeek" ]
[ 24 ]
[ "Image Post" ]
[ true ]
[ false ]
[ 0.93 ]
null
Hi r/Math ! We've seen a lot of hats and pins that say "MATH," so why not actually use the beauty of real math on clothes? I think that figure 4.1 from this game-theory paper I co-authored is really cool looking (it's an extensive-form representation of a problem in computer science known as the Byzantine Generals problem – technically, a variant of it involving a permissionless network), so I turned it into a t-shirt. I have the biggest interview of my life in a few days, and I'm going to rock this shirt. :)
Thanks! It's for a program called Y Combinator. Their first question is normally "So, what are you working on?" It will be tempting to just point at my chest and say, "This."
Cool
I would buy that shirt lol
[ "Can computers prove theorems?: \"And will we soon all be out of a job? Kevin Buzzard worries us all.\"" ]
[ "math" ]
[ "dnqr5d" ]
[ 334 ]
[ "" ]
[ true ]
[ false ]
[ 0.92 ]
null
Computers are excellent at deductive logic, so they 100% can prove when a conclusion necessarily follows from a collection of hypotheses. However, that's a fairly trivial statement, since without some kind of sophisticated software they wouldn't be able to distinguish "interesting" or "useful" theorems from a given set of provable statements. To speculate a bit: I strongly believe that mathematics is an unimaginably huge subject that humans are only just scratching the surface of. Furthermore, I believe that computers will allow us to greatly expand our mathematical horizons, where advanced AIs and human mathematicians will learn to complement each other using our different skill sets. To speculate even more: I believe humans and AIs will always be "interested" in different parts of the vast mathematical landscape with only some degree of overlap, so there will always be human-relevant problems that human ingenuity can help to solve.
Why would computers proving theorems put mathematicians out of a job? A proven theorem with no one to understand it is pretty much useless. Actually discovering or writing out a proof is grunt work. A mathematician's real value is to know what to prove, and why it might be useful, novel, or fundamental. A mathematician might need to create a new notation or mindset to turn a proof that would otherwise be unwieldy, verbose, or nonsensical into a concise, eye-opening truth. Automated proof generators will not have these capabilities in any predictable time-frame. Mathematicians will also be needed to create, update, expand, and improve any automatic proof generator. I guarantee that no such software will start out as "the best it could be" or, indeed, ever reach such a state. Could automated proofs change the field of mathematics and alter exactly what skills are needed to be productive? Of course; the same happened with calculators. But only a fool would say "Wolfram Alpha exists, we obviously don't need mathematicians any more."
This is what I like about propositional Vs first order Vs modal/higher order logics. It goes from, we can prove or disprove anything that can be described within this language...to not a fucking chance very quickly.
It's really disheartening to see the same kinds of popular misconceptions about AI show up in a math forum. Yes, machine learning is a powerful tool that will change life as we know it, but no, it has nothing to do with cognition. The field of AI has largely moved away from the Hofstadter view of 1979. Simulating thought processes was a cool idea, but algorithms designed with that in mind tend to require massive amounts of oversight and assumption, and tend to perform poorly even compared to far simpler statistical methods. Modern machine learning takes the total opposite approach: forget about "reasoning" entirely and focus on statistical methods for function approximation with minimum error. Every algorithm used in Big Data and every cool art project using Tensorflow and GPT-2 is the result of rejecting the "thinking machines" model. When people talk about advances in AI, they're talking about statistical tools for problems with millions of variables and massive datasets, where error is measured at every step against a known correct result. By definition, you cannot use a NN to prove something you don't already know. This makes it different from other statistical tools like PCA, which is compatible with scientific inductive reasoning. The people who should be worried about their jobs being automated are people who work with complex empirical problems: engineers, actuaries, marketers, people in finance, QA, security, transportation, agriculture, manufacturing, entertainment, media, consulting, and others who find patterns in complex data but don't have to concretely prove anything. And that's not getting into the economics of whether jobs are a finite resource, and whether the increased productivity will offset the disruption, higher barriers to entry, and possible monopolies (depending on how IP law will be treated in the future).
People often fail to understand the difference between aim and tool. Some believe using a sophisticated tool for a task makes the outcome very valuable. These people (including me at some point in the past) think that the tricky part is using a sophisticated tool, but what matters, as you put very well, is actually asking the right questions. I think there is virtually no difference between using an abacus and a theorem-proving software.
[ "What Are You Working On?" ]
[ "math" ]
[ "doaxmj" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
This recurring thread will be for general discussion on whatever math-related topics you have been or will be working on over the week/weekend. This can be anything from math-related arts and crafts, what you've been learning in class, books/papers you're reading, to preparing for a conference. All types and levels of mathematics are welcomed!
Getting through MIT's OpenCourseWare course on topology, along with How to Read and Do Proofs by Daniel Solow. Good stuff; would love a study group if any are out there.
I started the Baby Rudin today
Summarising some papers on quadratic forms over semi-local rings. Great theory, but damn summarising papers sucks
I came up with a sort of complex problem while watching a Numberphile video on peaceable queens.
My older laptop died, and I did most of my writing on it so while I've been doing research the past few weeks I haven't actually worked on my paper. Well, I bought a laptop off my friend and just finally got around to installing MiKTeX and TeXworks on it so maybe I can actually get myself to do some writing. I'm dumb though and the research I've been doing is on stuff that won't even really appear in the paper, so I also need to switch gears wrt my research to get the writing going again too.
[ "Examples of cool graphics/visual proofs in published papers?" ]
[ "math" ]
[ "do8z31" ]
[ 15 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
Anyone have any favorites?
Conway & Soifer's shortest paper: https://fermatslibrary.com/s/shortest-paper-ever-published-in-a-serious-math-journal-john-conway-alexander-soifer
Does Tristan Needham's complex number book count?
That one’s almost cheating! They don’t even answer their question! Seriously, check out John Nash. The guy’s really good for that. I think his thesis was less than 30 pages long, and still he defended it great and got his doctorate
I have seen this paper like 10 times already and I still don‘t understand it.
Consider an equilateral triangle of side-length n, and imagine you have an unlimited supply of equilateral triangles of side-length 1. You might see that for n = 2, for example, you can perfectly cover the thing with four such triangles in the classic Triforce configuration; for n = 3 you can cover the thing with 9 triangles by extending that same pattern with another row of 5. It might take a little effort to prove that in the general case you need n². This number is both necessary (based on area concerns: a unit equilateral triangle has area √3/4, the whole triangle has area √3·n²/4, so you need at least n² of them) and sufficient (because this construction works just fine to do it in precisely n² triangles, hooray). Now suppose I make the triangle just the tiniest bit larger, of side n + 𝜀. How many do you need now to cover it? (Keep in mind that you are allowed to go outside the boundaries of the original triangle to cover it; you just can't leave anything uncovered. Also the unit triangles you are using are allowed to overlap.) Well, this thing now has area √3·n²/4 + O(𝜀), more or less. By area considerations, exactly n² triangles will not work. But since 𝜀 is assumed tiny, you only need at least one more triangle by area considerations alone. The paper proves that n² + 2 is sufficient in two different ways. Fig. 1 takes the construction above and mutates it, saying that the last row that you had must have had 2n − 1 triangles, but if you give me two more to make an even longer row, they cover a trapezoid of top n, bottom n + 1, and unit height. The point is that if you let me squeeze this trapezoid a bit, so that it has protrusions which are equilateral triangles of side-length 𝛼, it can cover a trapezoid of top n − (n + 1)𝛼, bottom n + 1 − 𝛼, and height (1 + 𝛼)√3/2. All you need is 𝛼 = 𝜀 and your trapezoid is much wider than it needs to be, but exactly tall enough to cover the extra height of 𝜀·√3/2. Fig. 2
squashes the last row of triangles the other way: there are only 2n − 1 of them now, but we squash them to a height less than 1 in order to make the row wider, so that the bottom subtends width n + 𝜀 horizontally at the cost of being too short vertically. Presumably this ripples upwards through the other rows as well, but that is no matter: each row will be short by some amount proportional to 𝜀, but when we finally get to the top we will find that we can overlay two more triangles on our topmost triangle to make a bigger triangle, up to side-length 3/2. So with only those two more triangles at the top, we can make macroscopic adjustments up top to accommodate a lot of 𝜀-squashing of the rows underneath. The paper asks a question in its title and it is a sincere one: since n² + 1 is area-optimal, is there a construction with n² + 1 triangles that successfully covers it? Or do we have to use tricks like in Fig. 1 and Fig. 2 that require at least two extra triangles?
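The row-by-row tiling described above (row k from the apex holds 2k − 1 unit triangles) sums to a perfect square, matching the area lower bound. A quick check:

```python
def unit_triangles(n):
    """Row-by-row tiling count for a side-n equilateral triangle:
    row k from the apex holds 2k - 1 unit triangles."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Matches the area bound: total area n^2 * sqrt(3)/4, unit area sqrt(3)/4.
assert all(unit_triangles(n) == n * n for n in range(1, 100))
```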
[ "Any info on a Power Tower of Rationals?" ]
[ "math" ]
[ "do3uw2" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 0.71 ]
I've been thinking about this recently but can't seem to come up with an answer. Consider an infinitely tall power tower of positive rational numbers, i.e. (q_1)^(q_2)^(q_3)^... Can all positive real numbers be written as one of these infinite power towers? Here's my approach for writing e as such a tower: we want to find a sequence of rationals that, when chain-exponentiated, yields a sequence with limit e. First let q_1 be 11/4, since e is approximately 2.75. Then calculate that q_2 would have to be about 0.988, and approximate it using a rational with a larger denominator than that of q_1. The general scheme could be to let q_k have a denominator of 2^k, or 2^2^k, or whatever quickly growing sequence seems to ensure that the approximation improves with each level of the tower. While I may have a valid argument here, I'm really not sure how to go about formally investigating this. Any insight would be really appreciated!
If all the powers can be different, then this does work for every positive real, using the method you described. There are infinitely many sequences which approach a given number. As an example of how to generate such a sequence, you can pick a positive epsilon to be your convergence rate, so that the error at the nth step is like epsilon/n. Say {x_i} is the sequence and x is the limit you want. At each step, you just need to find a rational q_i such that |x - q_i| < epsilon/i. Since there exists an exact real value for the power that will give x, and the value of the tower is a continuous function of that topmost exponent, and rationals are dense in the reals, you can always find a rational that gets you within this bound (really, you can find infinitely many that do). You could also let the approximation get better at any rate, like epsilon^i instead, for example, and apply the same argument. Edit: Woops, constructed the tower backwards. You have to build from the bottom by exponentiating the previous result by a new rational. The argument and result is the same, though.
I guess that is kind of strange. If you build the power tower in the way you normally would, then the first term gets lost at the top, and the bottom of the tower isn't a fixed value, so analyzing it as a sequence of exponentiations doesn't seem as satisfying as having a fixed tower that you build starting at the bottom. You can still get this kind of method to work building the tower from the bottom, you'll just have to throw in some (iterated) logs to find the value you need to approximate at each step, then use the fact that q_1 ^ ... ^ q_k ^ r is continuous at each step, which is easy to prove by induction.
If the rationals can all be different this is trivially yes. I'd be more interested in knowing about the class of reals that can be represented (exactly) as an infinite tetration of some fixed value. That would be of course if it weren't trivially obvious that the infinite tetration of any rational number must either diverge, converge to 0 or equal 1.
I'd be more interested in knowing about the class of reals that can be represented (exactly) as an infinite tetration of some fixed value. Have you tried working it out? It's not that hard.
That would be of course if it weren't trivially obvious that the infinite tetration of any rational number must either diverge, converge to 0 or equal 1. Huh? This isn't true; for instance, you'll get a convergent result for rational numbers sufficiently close to sqrt(2).
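The sqrt(2) case mentioned above is easy to check numerically. A minimal sketch (the iteration count and the starting value 1 are arbitrary choices of mine): iterating x → sqrt(2)^x converges to 2, the attracting fixed point of that map.

```python
import math

def tetration_limit(base, iterations=200):
    """Iterate x -> base**x starting from x = 1 and return the final value."""
    x = 1.0
    for _ in range(iterations):
        x = base ** x
    return x

# For base = sqrt(2) the iteration settles on 2, since 2 = sqrt(2)**2 is the
# attracting fixed point; bases above e**(1/e) would diverge instead.
limit = tetration_limit(math.sqrt(2))
```

The convergence is geometric with ratio ln 2 ≈ 0.69, so 200 iterations is far more than enough for double precision.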
[ "Average students who majored in math at uni, what are you doing now?" ]
[ "math" ]
[ "do6smc" ]
[ 458 ]
[ "" ]
[ true ]
[ false ]
[ 0.97 ]
null
As an average student who is currently majoring in maths, i really appreciate this question dude
PhD student in plasma physics.
Fuck I wish I was literally Euler
3.5! is more than 6, how did you do that?
I was a slightly-above-average student, who got a Master's degree, then lost motivation to be a mathematician in my first year of my PhD program. Mostly because I wanted to put teaching before research but my professors there were pretty bad at teaching and encouraged me to be bad at teaching. Now I teach at a community college. There are *lots* of average math students teaching math in community college.
[ "16/64 problems." ]
[ "math" ]
[ "do8bhm" ]
[ 10 ]
[ "" ]
[ true ]
[ false ]
[ 0.82 ]
When I was learning about fractions in elementary school, my teacher brought up the fraction 16/64 as an example of something to NOT do. He said that you cannot cross-cancel the two 6s to reduce it to 1/4: even though 1/4 IS the correct answer, it is not the same as (1×6)/(6×4). I'm frequently reminded of this when I see someone do something the wrong way, but still succeed. Does anyone here have any other interesting 16/64-type examples in math?
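For what it's worth, the two-digit cases where this invalid "cross-cancellation" happens to give the right value can be enumerated by brute force. A small sketch (restricting to nonzero digits and excluding trivial cases is my choice):

```python
# Brute-force search for all two-digit "anomalous cancellations" like
# 16/64 = 1/4, where naively crossing out the shared digit gives the
# correct answer anyway.
from fractions import Fraction

def anomalous_cancellations():
    found = []
    for a in range(1, 10):
        for b in range(1, 10):
            for c in range(1, 10):
                if a == b == c:
                    continue  # skip trivial cases such as 11/11, 22/22, ...
                # fraction (ab)/(bc) with the b's "cancelled" to give a/c
                if Fraction(10 * a + b, 10 * b + c) == Fraction(a, c):
                    found.append((10 * a + b, 10 * b + c))
    return found

print(anomalous_cancellations())  # [(16, 64), (19, 95), (26, 65), (49, 98)]
```

So 16/64 has exactly three two-digit companions: 19/95, 26/65, and 49/98.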
Taking the title literally: you can also use this handy cancellation trick on the fraction / = 1/9.
sign errors are a Z/2 action on my homework
One that I find comes up a lot in intro calculus is problems like limit as x ->0 of sin(4x)/sin(7x). Some students get the (correct!) final answer of 4/7 by "cancelling out the sin's and cancelling out the x's".
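The "cancel the sins and the x's" answer can be sanity-checked numerically; a quick sketch (the sample points are arbitrary small values):

```python
import math

def ratio(x):
    """sin(4x)/sin(7x); for small x, sin(kx) ~ kx, so this approaches 4/7."""
    return math.sin(4 * x) / math.sin(7 * x)

approx = ratio(1e-6)  # very close to 4/7 = 0.571428...
```

Of course the "cancellation" only works here because both factors are linearized near 0, which is exactly what l'Hôpital or the small-angle expansion makes precise.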
I think of these as being called "Useless Eustace" problems, after question A2(ii) in STEP II 2010. Let I(α) = ∫₀^α (7 sin x − 8 sin³x) dx. [...] Useless Eustace believes that ∫ sinⁿx dx = sinⁿ⁺¹x/(n+1). [...] Find all values of α for which he would obtain the correct value of I(α).
I was a physics TA and took a point off because the student made two separate minus sign errors that cancelled each other out. The student tried to argue for that point back because the final answer was correct.
[ "What result in your mathematics training took you the longest to understand?" ]
[ "math" ]
[ "dnkd2h" ]
[ 107 ]
[ "" ]
[ true ]
[ false ]
[ 0.99 ]
Further, what made the result finally "click" for you?
The Great Picard's Theorem. For those not familiar, from Wikipedia: If an analytic function f has an essential singularity at a point w, then on any punctured neighborhood of w, f(z) takes on all possible complex values, with at most a single exception, infinitely often. I'd be quite surprised if it's possible to actually intuitively "feel" this result, but whenever someone asks me my favourite theorem this one tends to come up in the conversation, as it's such a strong result.
Bayes' theorem. In the formula P (A|B) = P(A) * P(B|A) / P(B), the meaning of the likelihood P(B|A) was eluding me for years, and reading all the explanations in the world was not helping. I knew how to use the formula in practice, I knew the derivation, etc., and yet intuitive understanding of P(B|A) just was not happening. Then, in grad school, I took a course on applied statistics for physicists. The professor, when explaining Bayes' theorem, said something (I don't remember what now) that suddenly connected the dots in my mind. Bayesian statistics is notoriously difficult to understand intuitively, as it runs contrary to the "natural" frequentist approach that we are used to from our everyday lives. I was not aware of that at the time and thought there was something wrong with me, and only much later, when I shared my concerns with others, I learned that I was not the only one struggling!
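A toy computation can make the likelihood P(B|A) concrete. The sketch below uses a hypothetical screening test (all the numbers are invented for illustration): A is "has the condition", B is "tests positive", and P(B|A) is the test's sensitivity.

```python
def posterior(prior, p_b_given_a, p_b_given_not_a):
    """P(A|B) = P(A) P(B|A) / P(B), with P(B) expanded by total probability."""
    p_b = prior * p_b_given_a + (1 - prior) * p_b_given_not_a
    return prior * p_b_given_a / p_b

# Hypothetical: 1% prevalence, 99% sensitivity, 5% false-positive rate.
p = posterior(prior=0.01, p_b_given_a=0.99, p_b_given_not_a=0.05)
# Despite the accurate test, P(condition | positive) is only about 1/6.
```

The surprise that a "99% accurate" test yields only a ~17% posterior is exactly the kind of thing that makes Bayesian reasoning feel unintuitive at first.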
Bayesian statistics is notoriously difficult to understand intuitively, as it runs contrary to the "natural" frequentist approach that we are used to from our everyday lives. I find this interesting because I feel like I am a Bayesian in my everyday life--that is, I hold some view about something, and then update it according to information that comes in over time. I have heard other people express the same view. Can you comment further on this point?
I think what you're describing is closest to the law of large numbers, and ignores the fact that the CLT tells you the whole distribution of the limit, and not just that the mean converges.
If you've already internalized the idea that all holomorphic functions are so nice that they have Laurent series, and then consider that essential singularities are those with infinitely many terms on the negative side, it pretty much follows for the same reason as e^(1/z). The large negative powers of small numbers "wrap around" the origin faster and faster.
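This can be illustrated numerically for f(z) = exp(1/z), which has an essential singularity at 0 (a sketch; the target w and the branch indices are arbitrary choices of mine): solving exp(1/z) = w along different branches of the logarithm gives preimages that cluster at the origin, so every nonzero w is attained in every punctured neighborhood of 0.

```python
import cmath
import math

def preimages(w, ks):
    """Solutions z of exp(1/z) = w, one per branch k of the logarithm:
    1/z = Log(w) + 2*pi*i*k, so z = 1/(Log(w) + 2*pi*i*k) -> 0 as |k| grows."""
    return [1 / (cmath.log(w) + 2j * math.pi * k) for k in ks]

w = 3 + 4j                       # any nonzero complex target works
zs = preimages(w, range(100, 105))  # five preimages, all very close to 0
```

Only w = 0 is missed, which is exactly the "at most a single exception" that Picard allows for this function.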
[ "I have dyscalculia and can't even qualify for college level math courses; clawing my eyes out and just need resources" ]
[ "math" ]
[ "zmdsoo" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.54 ]
null
Do you have an official diagnosis or documentation of it? You should be able to request accommodations from your university. These should include things like additional time for the exams, being able to use a calculator, access to formula sheets/notes during the exams. You may even be able to take the class as pass/fail so your GPA isn’t affected. But the university will require an official diagnosis to get that process started for you.
You should really change your view of it. It's easier said than done, but realize that what you saw in your classes is not all that there is to math; it's just an extremely small portion. Furthermore, it's impossible for math to be totally useless to you: even if you ended up never using the concepts, the thinking skills it instills in you are invaluable no matter what you end up doing. Hopefully this provides you with at least some motivation. It's a shame that at the lower levels it's often taught in the most dull way possible and the fun stuff only comes much later.
where are you at academically? are you in HS? Taking AP classes? in college, and if so what major and subfield? Grad student? That would help us tailor the advice. ​ Also, if math is such a sore point for you, why pursue it? Many majors do not require it and there are plenty of professions that don't either. Is it necessary for something you target? or is it more a matter of interest or desire to conquer a challenge/face a lifelong difficulty? Knowing your motivations would help, too.
[ "Those of you who self-study: how do you manage it?" ]
[ "math" ]
[ "dnzl11" ]
[ 360 ]
[ "" ]
[ true ]
[ false ]
[ 0.99 ]
I work full time (40+ hours) as a software engineer, plus I have other obligations, but I really want to get back into studying math on my own (I graduated a few years ago). Math has always been something that is an important part of my life so it's kind of depressing honestly that I'm not doing it anymore. So for those of you who study math on your own, how do you balance it with your job and other responsibilities? How many hours per week do you spend studying vs working? When do you do your studying (before work, weekends, etc.)? How important is it to you to keep studying math on your own? Is it just a hobby for you or is it your life's mission (or somewhere in between)? If you couldn't do math anymore, how would you feel about that? In sum, how strong is your "drive" to do math on your own? I'm just looking for other people in the same boat and their perspectives. Thank you to anyone who responds.
I'm trying this out to save time: https://ncase.me/remember/
Whiteboards around the house help. When I am learning new subjects I write out problems on whiteboards and visit them when I am running around the house.
I read papers/lecture notes on the train to and from work. Wouldn't say it's very effective without also taking notes or doing problems on your own, though. But I find that it's easy to do once you get into the routine of it, whereas studying once I get home seems much more tiring.
I'm in literally the same boat with you. Full time software engineer and I graduated almost a year ago. I think two things motivate me to self-study: It's something I enjoy It's something I'm excited to apply to some project (personal or professional) To a degree, 2 is kind of a subset of 1. Anyway, when I come home I just find an hour to set aside and work through a book, ebook, or video to develop the skill in question. Recently it's been this book . And some days I just wanna play video games and be lazy. So I think a large part of it is understanding that it's a slow process for the most part.
It really is all about time management. I am transitioning from physics to mathematics, so I have my thesis writing work, other research, TA assignments, etc. that together easily take around 50 hours per week. I'm studying math the rest of the time. Every time I do something that doesn't require me to do anything else, such as eating, I immediately open something up and start reading. And, of course, weekends are all about studying 24/7. Math is very demanding, and if you want to get good at it, while working full-time, then you pretty much have to give up on entertainment for the time being. :) Whether it is worth it for you depends on how passionate you are about math and how important it is in your future career. Honestly, if I was not planning to do a career in math, I probably wouldn't study math very seriously beyond what was needed in my primary career. I love math, but it requires a lot of commitment and effort, and I feel that combining a hard full-time job with delving into high-level math books and articles would just wear me out quickly. I would restrict myself to solving interesting puzzles requiring a lot of thinking, but not a lot of specialised knowledge, and maybe reading some math books for fun every now and then. That's why I'm switching to a math career: so I don't have to make that choice! But that's just me. It absolutely is possible to combine a full-time career with doing math as a hobby very seriously; it just isn't for everyone, I feel.
[ "Closure of Q" ]
[ "math" ]
[ "do04e5" ]
[ 40 ]
[ "" ]
[ true ]
[ false ]
[ 0.85 ]
Today in analysis we discussed what the closure of a set is. Slowly realizing that R is the closure of Q reminded me why I like math so much, that feeling of understanding is unparalleled.
That is a really cool fact! What I find really cool is that this is not the only completion of Q. Instead of the usual absolute value as a metric on Q, you can use one which says numbers are smaller when they have higher powers of a specific prime, say p, in their prime factorization. This metric is called the p-adic absolute value, and the completion of Q with respect to the norm induced by this metric is the p-adic numbers.
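A small sketch of that metric in code (function names are mine; the definition is the standard one, |x|_p = p^(−v) where v is the exponent of p in x):

```python
from fractions import Fraction

def padic_valuation(x, p):
    """Exponent of the prime p in x (negative if p divides the denominator)."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("the valuation of 0 is +infinity")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def padic_abs(x, p):
    """|x|_p = p**(-v): numbers divisible by high powers of p are 'small'."""
    return Fraction(p) ** (-padic_valuation(x, p))
```

For example |12|_2 = 1/4 and |1/9|_3 = 9, so in the 2-adic metric 1024 is much closer to 0 than 3 is, which is the sense in which this completion differs so drastically from R.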
The closure of Q under the normal absolute value is R. There are other absolute values called p-adic absolute values. You can take a closure in those absolute values and get different things called p-adics. However, as a theorem, the p-adics, reals (and rationals) are the only things you can get by doing something like this. You can take all of these things together to get the Adeles.
The shindig says 11 comments, but I only see 1. Do we really have 10 shadowbanned people in this thread?
Q is what we call a dense set: its closure forms the entirety of the space in which it exists (that space being the number line). But the surprising thing is that it can do this while still being countable, which means that you can list the rational numbers one by one (by using a form of pairing function). A space with a countable dense set is called separable, which is a notion of "smallness" in topology; the real numbers are uncountable but are at least small enough to have a countable "skeleton." What may be surprising is the fact that they can do this while still being Cauchy-complete, which means that the number line contains the limit of any sequence whose elements get arbitrarily close together. Such a property usually implies uncountability, making the real numbers "big but not too big." A space that is Cauchy-complete and separable is called a Polish space. Polish spaces are a sort of generalization of the metric properties of the real line and are used extensively in descriptive set theory.
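One explicit pairing-function-style listing of the positive rationals is the Calkin-Wilf sequence, where each term q is followed by 1/(2⌊q⌋ + 1 − q). A short sketch (the recurrence is the standard one; the helper name is mine):

```python
from fractions import Fraction
from math import floor

def calkin_wilf(n):
    """First n terms of the Calkin-Wilf enumeration of the positive rationals.
    Every positive rational appears exactly once in this sequence."""
    q, terms = Fraction(1), []
    for _ in range(n):
        terms.append(q)
        q = 1 / (2 * floor(q) + 1 - q)  # successor in the Calkin-Wilf tree
    return terms
```

The first few terms are 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ..., and no rational is ever repeated, which is exactly the "list them one by one" claim above.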
Nah. Reddit's comments are just borked rn. Nothing mods can do about it. 10/10 platform.
[ "Is There a way to factor a a cubic Function" ]
[ "math" ]
[ "zmbx3h" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.25 ]
null
You can factor any polynomial as long as it has roots. It's just that some are a lot harder than others.
Lol what an understatement
Use synthetic division
People started asking this question centuries ago. Every cubic polynomial can be factorised into (x − r)(x² + bx + c), where r is a real root, and the quadratic may or may not have real roots. This should make sense if you think about the graph of a cubic. The Wikipedia link above gives some methods. A useful start for your specific example is to rearrange to give the depressed form (x + 2/3)³ + (5/3)(x + 2/3) + 70/27, which is (slightly) easier to solve. Wolfram will give you an approximate root. If the root is rational (which it's not for your example), the rational root theorem gives a finite set of possibilities to try.
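The rational root theorem mentioned above is easy to automate for an integer-coefficient cubic: any rational root p/q in lowest terms must have p dividing the constant term and q dividing the leading coefficient, so there are only finitely many candidates to test. A sketch (function names are mine):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of a*x^3 + b*x^2 + c*x + d for integer coeffs
    [a, b, c, d] with a, d nonzero, via the rational root theorem."""
    a, d = coeffs[0], coeffs[-1]
    candidates = {sign * Fraction(p, q)
                  for p in divisors(d) for q in divisors(a) for sign in (1, -1)}
    def value(x):  # evaluate the polynomial by Horner's scheme
        result = Fraction(0)
        for c in coeffs:
            result = result * x + c
        return result
    return sorted(x for x in candidates if value(x) == 0)

print(rational_roots([1, -6, 11, -6]))  # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
```

Once one rational root r is found, dividing by (x − r) leaves a quadratic that the usual formula handles.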
Yup.
[ "Sub questions about asking questions" ]
[ "math" ]
[ "zmael5" ]
[ 2 ]
[ "" ]
[ true ]
[ false ]
[ 0.63 ]
null
They would likely need to be posted in the weekly quick questions thread or r/learnmath .
I've found that the maths stackexchange is pretty good as well, but sometimes it can take a while for your question to be answered.
I was going to say this. Also, first time users should definitely read this .
And this .
Thank you, I was not aware of r/learnmath
[ "Measuring the correction factor associated with 2 objects in an image" ]
[ "math" ]
[ "zm0s1x" ]
[ 1 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.57 ]
null
Account for this while measuring what, exactly?
I’m measuring the diameter of two cylindrical components, along the length of the cylinders, when looking from the top. The cylinders are concentric with one another and component B is within component A, hence the need for X-Ray imaging. Because component B is concentric with component A but has a smaller diameter it is further away from the lens of the camera.
The way light cameras (and our eyes) work, objects look bigger as they get closer to the lens. But way x-ray cameras work is that things look bigger when they get closer to the source of the x-rays. So they look bigger as they move away, which is the reverse of the usual. Divide the apparent diameter by the distance to the x-ray source to get a number proportional to the real diameter. You’ll have to calibrate the proportion by photographing a known object at a known distance.
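A minimal sketch of that point-source projection model (all names and numbers below are illustrative assumptions, not measured values): with a point x-ray source, an object at distance d_obj from the source is magnified on the detector, at distance d_det from the source, by the factor d_det / d_obj, so the object closer to the source looks bigger.

```python
def true_diameter(apparent_diameter, d_source_to_object, d_source_to_detector):
    """Undo the cone-beam magnification of a point x-ray source.

    Assumes a simple pinhole-style projection: apparent = true * (d_det / d_obj).
    """
    magnification = d_source_to_detector / d_source_to_object
    return apparent_diameter / magnification

# Hypothetical setup: source 1000 mm from the detector, two concentric
# cylinders at 500 mm and 600 mm from the source (all units mm).
outer = true_diameter(20.0, d_source_to_object=500.0, d_source_to_detector=1000.0)
inner = true_diameter(12.0, d_source_to_object=600.0, d_source_to_detector=1000.0)
```

As the comments above note, the distances have to come from calibration with a known object; without them, "smaller" and "farther from the source" are indistinguishable in a single image.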
You can do that. But you need to (1) calibrate your sensor, i.e. get a reasonable estimates of its intrinsics. Which projection model does it use? How do the lines of sights through the pixels travel through the measurement volume? And (2) to compute that correction factor for measuring, for example, the diameter, you need the exact distance of A and B from the sensor's origin. Otherwise you cannot differentiate between, for example, "A is smaller" and "A is a bit further away from the camera" - both can give you the same image.
Based on what I'm reading, you're taking some type of x-ray of objects and measuring diameters of these two. The only way that the readings would be inaccurate between the two is if the angle of the camera shot wasn't correct. The easiest way to fix this is to ensure that the photos are being taken so that the camera view is at a 90 degree angle to the cylinders.
[ "why does probability theory work?" ]
[ "math" ]
[ "zlznjf" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.09 ]
null
Quite a vague question, but let me give you a generic answer that might help. Theories and results in applied fields work under given assumptions; for example, the probability of rolling a 4 with a die is 1/6 given that each face has the same chance of showing up. Try to elaborate on why you think it shouldn't work.
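That "each face is equally likely" modelling assumption can be checked empirically. A sketch (the seed and trial count are arbitrary choices): simulate many fair-die rolls and compare the observed frequency of a 4 against the theoretical 1/6.

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible
rolls = 100_000
hits = sum(1 for _ in range(rolls) if random.randint(1, 6) == 4)
frequency = hits / rolls  # should land close to 1/6 by the law of large numbers
```

This is the frequentist sense in which probability theory "works": the model's predicted long-run frequency matches the simulated (or physical) one under the stated assumptions.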
if it didn't work, then the results in probability wouldn't be true, and if they weren't true, then we wouldn't have been able to prove them all.
Start at measure theory and question why it works.
We don’t really understand your question watch this to know why .
Of course it doesn’t work. Most theories has some assumptions that can never be true. Who has ever had a true fair dice or coin ? None !
[ "What open problem do you personally most want to see solved?" ]
[ "math" ]
[ "zmfq3y" ]
[ 76 ]
[ "" ]
[ true ]
[ false ]
[ 0.91 ]
null
p=np
As someone in theoretical physics, I’d like to see the millennium prize problem “Yang-Mills existence and mass gap” solved. It would mean that quantum field theory and all the funky things physicists do with it would finally have precise mathematical meaning.
What about the mirrored problem qn=q That's what I am personally looking forward to.
Pragmatically, the niche research problem I'm into. Because if it is solved, it might mean that I'm doing good with my research.
That's easy: n=1 or q=0. To be honest, I don't know why mathematicians are finding p=np so hard, it seems equivalent. Maybe I'm missing something.
[ "Do I need more real analysis or am I ready to self learn measure theoretic probability?" ]
[ "math" ]
[ "zmfhq6" ]
[ 10 ]
[ "" ]
[ true ]
[ false ]
[ 0.75 ]
Just finished up my course in undergrad real analysis course at the level of abbot. I got an A. We pretty much covered abbot entirely. Am I ready to tackle billingsley measure theoretic probability? Or do I need a harder real analysis book (baby rudin) before I tackle measure theoretic probability?
this post is probably going to get deleted soon since this is more of an r/learnmath question. I would suggest at least reading an introduction to measure theory before trying to do rigorous probability; most probability books will likely assume familiarity to some extent with measure theory. I would suggest MIRA by Sheldon Axler; it's a good introduction, fairly short, and available for free online: https://measure.axler.net/MIRA.pdf
Billingsley is, in my opinion, not a good book for undergrads to learn measure theory/probability from. It leaves out far too many details from proofs and doesn't do a great job distinguishing technical lemmas that are used exactly once from key theorems you'll use over and over again to be good for someone without too much mathematical maturity to efficiently self-study from. I second using Axler for a first pass in measure theory then coming back to Billingsley afterwards.
Go for it. Billingsley has a gentle on-ramp for measure theory. It emphasizes probability concepts initially. Very nice discussion of constructing an iid sequence of uniform variables.
I agree that Billingsley is not a good first book. With one analysis class, I’d recommend “A First Look at Rigorous Probability.” It is heavily based off of Billingsley but explained in much more detail. Another more suitable introductory book is “A Probability Path”.
I learned Lebesgue measure before general measure theory. I liked Royden's book.
[ "Recommendations for 'problems' textbooks." ]
[ "math" ]
[ "zmfmqf" ]
[ 49 ]
[ "" ]
[ true ]
[ false ]
[ 0.93 ]
Another post mentioned textbooks that are primarily composed of exercises, and a commenter responded with A Hilbert Space Problem Book by Paul Halmos. I also very much like this type of book, but no one really recommended any others in the comments of that post, so I'd like some more recommendations specifically for this type of textbook. Any subject, any level, any size.
Juliusz Brzeziński - Galois Theory through Exercises
In case you're French or speak French, the Francini-Gianella-Nicolas collections of exercises from oral entrance exams for Polytechnique-ENS (Cassini editions) are really vast: around ten 400-page books of exercises, all corrected. They're supposed to be undergraduate level.
Allen Clark - Elements of Abstract Algebra VB Alekseev - Abel's Theorem in Problems and Solutions
Peter Komjath - Problems and Theorems in Classical Set Theory.
In case you can read German:
[ "Recommended Differential Equations Self-Study Resources?" ]
[ "math" ]
[ "zmboye" ]
[ 35 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
Just finished taking Calculus III/Multivariable Calculus in Uni. I'm ready to move on to the next math topic, but I'm not going to take another math course next semester, so I'll be free to self-study at my own pace. Since I plan to take Diff Eqs as my next math course, I want to learn it first on my own terms. Any recommended resources? YouTube video series, books, or other online blogs? How would you suggest using these resources?
V.I. Arnold's ODE book. It might not be ideal for accompanying a standard course on differential equations because it doesn't dwell much upon tricks to solve ODEs but focuses more on qualitative properties. It's a good read.
A good portion of the lower DiffEQ curriculum is based on linear ordinary DEQs. These are pretty widely applied to circuit analysis, so you could look up a college's Signals & Systems textbook to find real-world examples of differential equations. Otherwise you should just look up the textbook your course will use and start reading. It will provide you the most benefit.
Any self study question can concisely be answered with “MIT OCW”
This. This book is incredible. In fact Arnold's books in general are incredible.
Not OP, but could you offer a rigorous ODE book? I plan to work through Evans soonish but I'd like to do an ODE book first.
[ "What do you think is the most beautiful result in mathematics?" ]
[ "math" ]
[ "zmbo30" ]
[ 292 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
null
Cantor's diagonal argument, and the whole discovery of different cardinalities/infinities
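The diagonal argument can be sketched in a few lines of code for the finite case: given any square list of 0/1 sequences, flipping the diagonal produces a sequence that differs from the i-th listed one in position i, so it cannot appear in the list (the example rows are arbitrary).

```python
def diagonal_complement(rows):
    """rows: a square list of 0/1 lists; returns a row differing from each
    row i at index i -- the heart of Cantor's diagonal argument."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [[0, 1, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 0, 1, 1]]
new_row = diagonal_complement(rows)  # differs from row i at position i
```

The uncountability proof is this construction applied to any purported complete listing of all infinite 0/1 sequences: the diagonal complement is a sequence the list necessarily missed.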
I find Cauchy's integral theorem of complex analysis to be incredibly elegant. Such a simple statement, yet so powerful.
Noether’s theorem , sad no one else went for my favorite one here.
Less well-known than I feel like it should be, the Erdös-Kac Theorem. Choose your favourite (very) large number N, say a googolplex, and pick an integer n uniformly at random in the interval [N, 2N]. Now look at the number of prime factors of n. Erdös-Kac says: for large N, this number of prime factors will follow a normal distribution, with mean and variance equal to log(log(N)). This, to me, is a beautiful example of a marriage between number theory and probability; primes are these illustrious little deterministic fuckers that manage to look completely random at times.
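The statistic in the Erdős-Kac theorem is ω(n), the number of distinct prime factors of n, and it is easy to poke at numerically. A sketch (the interval and sample size are arbitrary, and the match is loose at small N since the theorem is asymptotic):

```python
import math

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)  # leftover n > 1 is itself prime

N = 10**5
sample_mean = sum(omega(n) for n in range(N, N + 2000)) / 2000
# sample_mean sits near log(log(N)) ~ 2.44, as the theorem predicts
# (plus a known constant-order correction at finite N).
```

Histogramming omega over the sample would show the approximately normal shape; here we only check the mean.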
Central limit theorem
[ "What are your favorite portable math books?" ]
[ "math" ]
[ "zlkhf4" ]
[ 14 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
I'll be doing some traveling soon. It'd be nice to bring a math book to work through, but I don't want to carry a large, heavy textbook. What books do you recommend that are physically small/light? Yes, I know about phones/kindles/computers; I prefer a physical book. I'm mostly looking for books at the intro to intermediate graduate level, but feel free to suggest others if they stand out! Dealer's choice! Some of my textbooks in undergrad were almost entirely definitions and exercises; you prove all the theorems yourself. I absolutely adore this style of book and would love to know about more examples if they're out there. They're also usually very portable.
To your Bonus question: A Hilbert Space Problem Book by Paul Halmos. The book really is just problems that help you understand Hilbert spaces better. Here are a few other books that I really enjoyed that can also have a small profile: Number Theory by George Andrews; A Walk Through Combinatorics by Miklos Bona (lots of problems and solutions); The Mathematics of Medical Imaging by Feeman; Introduction to Analysis by Rosenlicht; The Mathematics of Computerized Tomography by Natterer - "the Bible" of CT, great to see some applications of Fourier theory; Bergman Spaces by Duren and Schuster - basically read through this text while on the bus in graduate school; Scattered Data Approximation by Holger Wendland; Quantum Theory for Mathematicians by Brian C Hall; Analysis Now by Pedersen
I don’t go on walks, sorry.
I never leave home without the following books: - by Paul Joseph Cohen. - by Karen Ellen Smith. - by Walter Rudin. - by Otto Forster. - by Vladimir Igorevich Arnold. - by Gerald Budge Folland.
- Serre's: A Course in Arithmetic (UG level), Linear Representations (UG level), Local Fields/CFT (grad level) - Silverman and Tate's: Rational Points on Elliptic Curves - Silverman's Elliptic Curves (once you are done with Tate's book); it is a standard book
Yes, they’re either in my car or in my suitcase.
[ "I think math might be clicking for me?" ]
[ "math" ]
[ "zlmn34" ]
[ 119 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
Ok so I (17M) feel like math is clicking for me, and I'm really starting to love it. For context, I was always ok in math, and I skipped a year of math in middle school. However, I had a mental breakdown in 8th grade as well as personal health issues, which meant I got an F in algebra 1, as well as all my other classes. For my high school career I was disheartened, so I never really cared about math nor did my best. I hated geometry as well as algebra 1. However, this year things feel different. My brother, who is a math major himself, has offered me Serge Lang's book Basic Mathematics and it has really changed my perspective on math. I haven't read too deep into it, but I thought of something I didn't before, which is that it's really beautiful how these rules and principles work. I've been memorizing rules and it's really helped me. I care less about the numbers and more about how things work. I'm graduating early by a semester from high school, and I plan to take trig and statistics in January, and hopefully make it up to upper division math.
I care less about the numbers and more of how things work Welcome to the club. That's what it's all about
Keep it up but remember it’s ok to struggle. Everyone hits walls in math at some points. Resiliency is key to success. The “ah ha!” moments are wonderful, but the real trick is to not get bogged down or frustrated with the periods in between. Cheers!
Good for you! Continued success!
to read and to read again.
Strangely enough, I understood math much more after I studied for the SAT math section. Good for you, man.
[ "How often in your research are you confident that the thing you're trying to prove is actually true?" ]
[ "math" ]
[ "zljuzu" ]
[ 34 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
Just a thought I recently had: When doing exercises in lectures you are always 100% sure that the thing you are trying to prove is true and provable, even though the steps to proving it may not be obvious. But if you want to use a certain theorem in your research, a theorem that nobody has proven yet, can you even be sure it's true? Even if intuition holds and you can't find counterexamples, it just seems extremely frustrating to invest a lot of time into proving something where there is a counterexample you just missed. So do you only really start proving something if you know the very statement you are trying to prove is true for certain or do you just kinda try to prove it and see where this leads you to?
In practice I don't really make up a theorem and try to prove it. Usually I have some general questions and vague ideas of answers. Then I will have some computations/arguments and I try to figure out what the exact theorem is that this argument proves. For example I was studying some moduli space and wanted to know some basic properties. I knew of a method to compute where the singularities are, so I did this computation and found out that there are no singularities. Now I had a proof for the theorem that the space is smooth. Then later I found a map from this space to another moduli space that was known to be smooth. I checked to see if this map was smooth and it was. This implied that the original space is smooth. So now I had a different proof for the theorem that was actually a lot easier. Nowhere in this process did I actually sit down to prove that the space was smooth.
When doing exercises in lectures you are always 100% sure that the thing you are trying to prove is true and provable, even though the steps to proving it may not be obvious. Not necessarily the case. I'm a big fan of giving students "prove or disprove and salvage if possible" (PODASIP) problems where if the statement is false they need to see if they can prove a salvaged version. A similarly good sort of problem is to give students a problem to prove and then ask them to generalize it; ideally there will be multiple generalizations as options. Now, to answer your question: it really depends on what I'm working on. Take for example, this paper where I and some of my students looked at applying total difference labeling to well behaved infinite graphs. Here, a total difference labeling is where one takes an undirected graph, and labels each vertex with a positive integer, and then labels each edge with the absolute value of the difference of its two vertices. We then insist that no two adjacent vertices have the same label, no two adjacent edges share the same label, and no edge shares its label with either of its endpoints. One is interested in, given a graph G, the minimum k such that there is a total difference labeling of G with all labels at most k. In this context, very little was known about the topic, so when we tried to prove things about stuff, we really were going in very blind. In contrast, see this paper on the bounds on the second largest prime factor of an odd perfect number. It is very likely that no odd perfect numbers exist, so any bound we were trying to prove is almost certainly true. But this isn't necessarily very helpful. So a more useful question isn't "Is what I'm trying to prove true?" but "are the techniques I'm using powerful enough to prove the sort of statement I'm trying to prove?"
It works at any level. The PODASIP term comes from PROMYS , which is a high school program. I also use the same idea in my undergrad classes. At the graduate school level everything you encounter is a PODASIP by default. Giving young students experience in how to deal with uncertainty is a huge advantage to their development as a mathematician.
When faced with a hypothesis which may or not be valid, at least one mathematician has tried this technique: "On Mondays, Wednesdays and Fridays, I try and prove it, and on Tuesdays, Thursdays and Saturdays I try to find a counter-example." I have no information about the success rate of this process.
I do this all the time.
[ "what kind of industry jobs are out there for pure mathematicians?" ]
[ "math" ]
[ "zlqqih" ]
[ 252 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
I'm an undergrad, currently looking at PhD programs. Pure math is generally rumored to have no value in industry. Is this true? Are there industry jobs which utilize pure math? Or, are there jobs where pure mathematicians are specifically sought after?
Pure math usually also contains things like functional analysis, graph theory, game theory, combinatorics, etc. All of these have industry-related jobs, but there is one thing you must have no matter what math you learn, be it applied or pure or statistics: programming. The application of any mathematical theory in industry is primarily done through software. Learn any math you want, but also learn programming, and you will have a high level of skill in mathematics (very few people in the labour force) along with programming (lots of people in the labour force). This combination skill set will make you more competitive and allow for movement in the labour force. Or you can stay in academia, but I would still suggest learning programming. Integrate it into your work; it will make you a better professor who is helping their students be better prepared. Do not get left behind by a world that is surging forward.
I mean you probably can't get a job as a mathematician, but you could probably apply to jobs that require understanding of mathematical concepts. Statistics and calculus are very important for applied engineering and plant operations, data analysis, and likely stuff like insurance. You will have to keep in mind that people would likely prefer a more specialized degree that focuses on applied knowledge, but they don't always have that luxury.
Highly suggest looking into departments that have strong applied programs. It's a misconception that "pure" math is more rigorous or more "real math". In a good applied program you'll get to do pure math and learn some valuable tools for industry. There are various science roles (national labs, big tech, etc.) that value the skills applied mathematicians have.
I've been doing machine learning research (usually not developing new architectures and stuff, applying existing ML techniques to scientific research projects) for 7-8 years now. My CS skills are nowhere near those of a real SWE, but teaching yourself enough to get by isn't hard. Pays much better than academia and gives me interesting problems. I rarely use anything from my pure math PhD, but having that line on my resume got me in the door. The value in industry is the ability to walk into a room and say "Hi everyone, I spent some time with your data, I'm a mathematician and you can trust that I know what I'm talking about when I put graphs on the screen and say big statistics words, here's a paper about the method I used".
Adding on to this, I do “applied algebra”. I literally study algebraic curves over finite fields, and sometimes that leads to useful error-correcting code constructions. So like you said, not even that it’s no less rigorous than pure math, I literally do pure math for my work.
[ "Quick Questions: December 14, 2022" ]
[ "math" ]
[ "zlw630" ]
[ 14 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread: Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. For example consider which subject your question is related to, or the things you already know or have tried.
My friends say that is not a good method because the second person has 2/3 chance to end up in the team against the first person This is how it should be though. There are 3 other people, so person 2 has probability 1/3 of being on the same team as person 1. What probability does your friend expect?
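To make the 1/3 concrete, a quick Monte Carlo simulation can settle it (a sketch of my own, assuming 4 players split into two teams of 2 by a fair shuffle):

```python
import random

# Split 4 people into two random teams of 2 and estimate the chance
# that person 0 and person 1 end up on the same team. Person 1 has
# 3 equally likely partners, so the answer should be 1/3.
random.seed(42)
trials = 100_000
same = 0
for _ in range(trials):
    people = [0, 1, 2, 3]
    random.shuffle(people)
    team_a = set(people[:2])
    if (0 in team_a) == (1 in team_a):
        same += 1
estimate = same / trials  # close to 0.333...
```

Correspondingly, the chance of ending up against person 1 comes out near 2/3, exactly as it should.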
Differentiability is a concept which works best on open subsets of R^n, rather than generic subsets. This is because openness lets you understand how a function behaves in all directions around a point; this is most vividly seen in the theory of optimization, say in the difference between something like Lagrange multipliers (extrema on a boundary) vs. Fermat's criterion for extrema on an interior. It's hard to be more specific without seeing what theorems you're referring to, but almost certainly any theorem involving differentiability you have will become much more complicated if you don't assume open domains, and any theorem just involving continuity will probably work well in very general situations.
To elaborate on the other answer: the implication infinite field => characteristic 0 is wrong. The other direction is true. One example mentioned in the other answer is the algebraic closure of a finite field. The algebraic closure of a field always exists (I think you need the axiom of choice to prove it), and the characteristic stays the same. Then you can show that an algebraically closed field needs to be infinite: Suppose your field F has a finite number of elements a_1, ..., a_n. Now consider the polynomial ((X - a_1) (X - a_2) ... (X - a_n)) + 1. No matter which element you insert into your polynomial, you get 1 as the value (which is not 0, because we require that 0 and 1 are different elements in a field). Therefore this polynomial has no roots in your field, and therefore this field can't be algebraically closed. Therefore if you take your favourite field of characteristic p and construct the algebraic closure, you now have an infinite field of characteristic p.
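The counting argument above can be checked concretely for the prime fields F_p (a small sketch of my own; for prime p, the integers mod p form a field, and the helper name is made up):

```python
def escape_poly_roots(p):
    """Roots in Z/pZ of P(x) = (x - 0)(x - 1)...(x - (p-1)) + 1.
    By construction P evaluates to 1 at every element, so the list
    should always be empty: no finite field is algebraically closed."""
    def poly(x):
        prod = 1
        for a in range(p):
            prod = (prod * (x - a)) % p
        return (prod + 1) % p
    return [x for x in range(p) if poly(x) == 0]

# P has no roots over any of these prime fields
no_roots_anywhere = all(escape_poly_roots(p) == [] for p in [2, 3, 5, 7, 11])
```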
Derp, those are 9s. Thanks, sorry for the double reply.
Look into probability distributions like the geometric distribution (if you try over and over again until you win, each trial having a fixed probability p, what's the probability that you'll have to try x times before winning?) and binomial distribution (if you try n times, each having probability p, what's the probability that you'll succeed in exactly k of the attempts?)
[ "Help: I’m not a mathematician, but I want to understand the different types of infinities and how I can picture them and explain someone else if I had to." ]
[ "math" ]
[ "zxep1g" ]
[ 10 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.71 ]
null
Hilbert's hotel is good at understanding aleph-null, not as much for other cardinalities.
Search up "Hilbert's hotel".
Hilbert's Hotel doesn't explain / compare different types of infinities. It talks about the infinity of the natural numbers and some counter-intuitive properties it has. But to OP, if you're interested in infinity and willing to slightly change the focus of the question, Hilbert's Hotel is probably one of the best places to start. It could be a good basis for understanding higher infinities too.
It's basically impossible to picture them except for the smallest one. First, let's assume that you're talking about infinite cardinals, rather than other types of infinity (e.g. infinite ordinals, infinite surreal numbers, the point at infinity, etc.). The cardinals are ordered in a sequence, and the sequence is indexed by the ordinals. There are 2 main sequences: The Beth sequence, which iteratively produces a sequence of cardinals by successively taking powersets. Because it can be explicitly constructed, you can give explicit examples of sets with those cardinalities at the lower levels of the sequence; even then it quickly becomes too complicated, and really only up to Beth_2 is used often. The Aleph sequence, which just orders all the infinite cardinals from smallest onward. There is basically no way to visualize them except for Aleph_0, which is the cardinality of the natural numbers. There is almost no relationship between Beth and Aleph: Beth_0 = Aleph_0, Beth_n >= Aleph_n and there are some other small conditions, but for the most part which Beth corresponds to which Aleph is an unanswerable question.
There are three types of "infinite" numbers that I am aware of: the ordinals, the infinite hyperreals, and the cardinals. Ordinals let us order things. Just as we can order a set of things to speak of their 1st element, their 2nd element, and so on, we also can, if the set is infinite, order its elements so that we have a 1st element, a 2nd element... all the way up to an ω'th element, and then an (ω+1)th element, an (ω+2)th element, and so on. ω is what we call the first infinite ordinal. This basically gives us two copies of the natural numbers we can use to order a really big set, with ω coming "later than" any natural number. Note that there is no single number coming right before ω; it has no predecessor. All the numbers before ω are naturals, but there is no last natural coming right before ω. Even a huge natural like 100000000000000^100000 will have another natural coming after it, like (100000000000000^100000)+1, that will still be before ω. This means that ω is what we call a limit ordinal, an ordinal that has no immediate predecessor. In contrast, ordinals like ω+1 or ω+5 are not limit ordinals, since they have predecessors ω and ω+4, respectively. There are more limit ordinals that come after ω, which we use to order sets for which not even two copies of the naturals are enough. The reason we want to order large sets and why we need ordinals is that sometimes we want to look at all the elements of an infinite set "one by one", which obviously we cannot do by actually going through the elements ourselves. By ordering the set we're interested in, using transfinite ordinals, we can analyze the set's elements by applying certain properties of the ordinals, namely induction (a technique used when we know something is true for a structure of a certain size, and want to show that it is true for a structure of any size).
A cool application of this lets us construct infinitesimal numbers that we can use to make calculus easier. We do this by, in very informal terms, overlaying infinitely many copies of the real numbers on top of each other (something we call an "ultrapower"), using ordinals (through a theorem called Zorn's Lemma) to tweak them in just the right way so that infinitesimals appear. The number system including these infinitesimals is called the hyperreals, and apart from infinitesimal hyperreals also includes infinite hyperreals. These are a completely different type of infinite number and behave in a completely different way than ordinals and cardinals, but sadly I know very little about them. As for cardinals, I would also recommend looking up "Hilbert's hotel" for a simple explanation of how they work, and why we cannot simply use ordinals to measure infinite sizes. That said, something that is usually not mentioned is that these different sizes of infinity are completely relative: We say an infinite set has a larger cardinality than another infinite set if there is no mathematical rule (a function, to be precise) establishing a one-to-one relation between each element of the sets. This is most usually taken to mean that one set is truly larger than the other, but it is just as valid to say "these two sets are both infinite, and infinity only has one size, but one of these sets is too complicated to define a function linking it with the other." This interpretation is supported by something called the Downward Lowenheim Skolem theorem. What this theorem lets us do is construct a new version of the entire mathematical universe of set theory, in full detail, all a single set, a set with the lowest possible infinite cardinality, Aleph null. Looking at any subset of this mathematical universe from the outside, we see that it will have cardinality at most Aleph null, no larger infinities at all. 
But if we were to go inside this universe, and play only by its rules, larger infinite cardinalities would once again appear, simply because we cannot define, from within, the functions needed to show that every infinite set is in fact the same size. I hope this is useful and that my math wasn't too sloppy!
[ "What is your least favorite integer or decimal?" ]
[ "math" ]
[ "zxdo3v" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.61 ]
null
Anything with over 3 significant digits.
"The observable universe has a diameter of 93 billion light-years, which means it has a circumference of 292.16811678385 billion light-years!" Every redditor who wants to send me into an eye-twitching rage.
A decimal with more than 4 leading zeros.
Lol, I don't think I hate any particular decimals. I hate when I am doing applied math and the float precision gets messed up
Same thing - yes.
[ "Why do people say that you cannot factor x^2+4 with real numbers when you can?" ]
[ "math" ]
[ "zxca6y" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.27 ]
null
That is indeed a factorisation of x^2 + 4, but the brackets are not themselves polynomials. When we factorise things, we generally only concern ourselves with factors which are the same kind of object as the original thing. 5, for example, is prime because the only natural numbers that it factorises into are 1 and 5, and we disregard the fact that we could also have 2 x 2.5, because 2.5 is not a natural number.
By that logic, you can factor any function f(x) as f(x) = (2f(x))(1/2). The term "factorization" only makes sense when you specify what type of factors are allowed. Without specifying that, the entire notion of factorization is a meaningless concept. When you're talking about factorization of polynomials, the convention is that the factors are also polynomials.
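For what it's worth, the factorization that does exist over the complex numbers uses non-real polynomial factors, which is easy to check numerically (my own illustration, not from the thread):

```python
# x**2 + 4 = (x - 2i)(x + 2i): a perfectly good factorization into
# polynomials, just not ones with real coefficients. Verify the
# identity at a few sample points.
for x in [-3.0, -1.0, 0.0, 2.5, 10.0]:
    assert abs((x**2 + 4) - (x - 2j) * (x + 2j)) < 1e-9
verified = True
```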
No, you just misunderstood what was being said. You can also factor it as (1)(x^2 + 4).
They mean factor it into nontrivial polynomials. Those factors are not polynomial.
thanks.
[ "Naturals should (not) be considered as a subset of the rationals" ]
[ "math" ]
[ "zx86oi" ]
[ 0 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.25 ]
null
A better way to convey what you might be trying to get at is to say there are useful concepts in N that become worthless when viewed in Q, e.g., divisibility. Much of elementary number theory is related to the divisibility relation on N or Z (such as factoring, primes, gcd, modular arithmetic), and the divisibility relation becomes utterly trivial in Q or R, since every nonzero rational or real number divides every other nonzero rational or real number: there are no primes in Q or in R. When working with divisibility in N, the fact that we are in N rather than Q matters a great deal. We simply use the canonical embedding of N into Q rather than declare N is “not” a subset of Q. Among natural numbers, their divisibility relation as natural numbers and as rational numbers are different: one is interesting and one is not. We don’t say there is no divisibility relation on Q, but we don’t make much use of it simply because it is boring. The same thing happens in analysis. There are useful properties of some functions on R that don’t carry over to C: the function sin(x) is bounded on R but its complex analogue sin(z) is unbounded on C, and there are smooth bump functions on the real line but no analytic bump functions on C. In short, real analysis is not a special case of complex analysis. But R has a canonical embedding into C, and to declare real numbers are “not” complex numbers is just being argumentative.
Can you elaborate on the “We know that the naturals are not a subset of the rationals” ? I mean if n is a natural, n = n/1 and n/1 is quite clearly a rational.
In the usual set-theoretical foundations of math, the naturals are defined as the transitive, well-founded, finite sets, ordered by inclusion (this means that 0 is the empty set, 1 is the set containing 0, 2 is the set containing 0 and 1, and so on). The rationals are a completely different object: A rational is the equivalence class of all its representations as the quotient of two integers (which have their own convoluted set theoretical construction necessary to deal with negative numbers), as ordered pairs. So, for example, rational 1/2 is the set containing the ordered pairs (1,2), (-1,-2), (2,4), (-2,-4), and so on. Rational 1=1/1 is the set containing (1,1), (-1,-1), (2,2)… This is what OP means by saying that natural 1 and rational 1 are different objects.
Avoiding the circularity you are pointing out really isn't a problem though. You can always define an intermediate set N' first using the usual construction we would do for N, then define Z using the equivalence class construction from N', then define N as the image/range of the embedding from N' into Z. If we want we can use intermediate sets all the way through to defining R and then define N, Z and Q in terms of embeddings into R. From that pov a lot of your issue becomes to do with which sets do we want to take as the naturals, integers and rationals, the sets we get from the usual constructions or the embeddings into R? It doesn't matter so much which ones we pick so long as we're consistent.
What if you want N to literally be a subset of Z (or Q or R etc)?
[ "What's the derivative of a modulo?" ]
[ "math" ]
[ "zx2dtn" ]
[ 3 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 1 ]
null
By definition (well, my definition), we have f(x) mod g(x) = f(x) - floor(f(x)/g(x)) g(x), where floor is the "round down" function. This is something we can differentiate using the usual rules. I'll omit the "x" argument from now on. (f mod g)' = f' - floor(f/g) g' - floor'(f/g) (f/g)' g = f' - floor(f/g) g' (*) I can't come up with a cleaner form than that. Note that in (*) I used: floor'(y) = 0 if y is not an integer and undefined otherwise. This also tells us that the domain of (f mod g)' is the intersection of: - domain of f' - domain of g' - g nonzero - f/g not an integer As expected, if g happens to be a constant the second term vanishes and we're left with just f'.
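The formula can be sanity-checked against a numerical derivative; here's a sketch (helper names are my own, using plain central finite differences):

```python
import math

def mod_deriv(f, fp, g, gp, x):
    # (f mod g)'(x) = f'(x) - floor(f(x)/g(x)) * g'(x),
    # valid where g(x) != 0 and f(x)/g(x) is not an integer.
    return fp(x) - math.floor(f(x) / g(x)) * gp(x)

def fmod(f, g):
    # f(x) mod g(x), per the definition in the comment above
    return lambda x: f(x) - math.floor(f(x) / g(x)) * g(x)

def central_diff(h, x, eps=1e-6):
    return (h(x + eps) - h(x - eps)) / (2 * eps)

# Constant g: d/dx (x**2 mod 3) at x = 1 should be 2x = 2
f, fp = (lambda x: x * x), (lambda x: 2 * x)
g, gp = (lambda x: 3.0), (lambda x: 0.0)
d1 = mod_deriv(f, fp, g, gp, 1.0)
n1 = central_diff(fmod(f, g), 1.0)

# Nonconstant g: d/dx (100 mod x**2) at x = 8, where floor(100/x**2) = 1,
# should be 0 - 1 * 2x = -16
f2, f2p = (lambda x: 100.0), (lambda x: 0.0)
g2, g2p = (lambda x: x * x), (lambda x: 2 * x)
d2 = mod_deriv(f2, f2p, g2, g2p, 8.0)
n2 = central_diff(fmod(f2, g2), 8.0)
```

Both test points stay away from the discontinuities (where f/g is an integer), which is exactly the domain restriction noted above.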
desmos can show you: https://www.desmos.com/calculator/lpocr2siii
Thank you! So this is what I understand: Basically, the derivative of modulos can be defined like this: Dx(c1 mod c2) = 0, where c1 and c2 are constants and c2 != 0 Dx(f(x) mod c) = f'(x), where c is a constant != 0 Dx(c mod f(x)) = -f'(x) * floor(c/f(x)), where c is a constant Dx(f(x) mod g(x)) = f'(x) - g'(x) * floor(f(x)/g(x))
Yes, but I want a formula, just like Dx(x^n) = n x^(n-1), then Dx(f(x) mod g(x)) = ? At least with Desmos we can see that Dx(f(x) mod c), where c is a constant, = f'(x)
Desmos can show you the rest of your cases, too. But to express what you are seeing in desmos in the case of something like f(x) = mod(100, x^2) when x is larger than 10 (or less than -10) then f(x) is just the constant function 100, so its derivative is 0. When x is something like 8 or 9, we are looking at mod(100,64) or mod(100,81), which are evaluated as 100 - 64 and 100-81. When x is 6 or 7, this function would be 100 - 2(36) and 100 - 2(49). In general the function f(x)=mod(100,x^2) is f(x) = 100-k(x^2) where k is an appropriate step function. The derivative of this would be f'(x) = -2kx, with k still being a discontinuous step function. So, more generally, the derivative of f(x) = mod(c, g(x)) is f'(x) = -kg'(x) for an appropriately described step parameter k. Similarly, the derivative of f(x) = mod(g(x),h(x)) = g(x) mod h(x) is f'(x) = g'(x)-kh'(x) for an appropriately described k. (Describing this k is a messy piecewise junky expression that is not worth the trouble of trying to write out)
[ "Interesting Question I’ve thought of" ]
[ "math" ]
[ "zwxdvx" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.33 ]
null
There are summation formulas. The one for a range from 0 to n is (n*(n+1))/2
You would just subtract the summation of the smaller number minus 1. So for 5 to 20 you can fill in 20 for n, and then subtract by the same formula but with 4 for n.
Is there a formula for the same thing but starting at a number that isn’t zero?
The strategy to find sum[a,b] is to use the above equation to find sum[0,(a-1)] and subtract that from sum[0,b], so you get (b^2 + b - (a-1)^2 - (a-1))/2
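Putting the two comments together as code (a quick sketch; `sum_range` is my own name for the helper):

```python
def sum_range(a, b):
    # Sum of the integers from a through b inclusive:
    # sum[0..b] - sum[0..a-1], each via the n*(n+1)/2 formula.
    return (b * (b + 1) - (a - 1) * a) // 2

total = sum_range(5, 20)  # 5 + 6 + ... + 20 = 200
```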
Practical application: The amount of customers has linear growth: a*n+b, where n is a time unit. You want to know the sum of all customers within the first t time units. It is now easy to calculate using the summation formula.
[ "Everything about the life of Ramanjuan is fishy!" ]
[ "math" ]
[ "zwrkki" ]
[ 0 ]
[ "" ]
[ true ]
[ false ]
[ 0.31 ]
null
What's fishy about it? He was just a religious guy who really liked math and dedicated all of his thoughts and efforts to it
If you're implying that his math was given to him by supernatural means, then I'd call that a terrible insult to how much work he put in to become as skilled as he was. He was born with a good head on his shoulders, but only through years of hard work did he hone it to the point where he could make new advances in math, and no deity deserves credit for the work that he did.
He did know he was going to die. He returned home because he had tuberculosis.
And what's the problem with this? Is there nothing that you are passionate about and would spend time doing to your dying day? Just because you have never seen someone dedicate their life to mathematics doesn't mean it can't happen, or even that it is weird if and when it does happen.
When I say fishy, I mean weird. His supernatural-seeming intuition, the way he consistently maintained that the goddess DID come to him in his dreams, and him just spending his last year with equations as if he already knew he was going to die and should just write it all down: everything just seems not ordinary, I don't know how to put it better.
[ "June Huh, High School Dropout, Wins the Fields Medal" ]
[ "math" ]
[ "zxe7dx" ]
[ 290 ]
[ "" ]
[ true ]
[ false ]
[ 0.81 ]
null
Dropped out of high school, but went to college and both parents were professors. Then, was taught by a fields medal winning prof. Headlines like this are incredibly misleading.
"When in labor his wife caught him doing math" - I laughed pretty hard when I read this.
Yeah, but calling him a high school dropout and leaving it at that is disingenuous. He went to college.
The taught-by-a-Fields-medalist part doesn't seem that silver-spoon-y, considering it happened in a 200-student class that collapsed to 5. His entrance to SNU, though, may rub a lot of Koreans the wrong way, from what I know about the country. That said, if anything this is actually an extremely common life trajectory for some of the greatest minds in human history.
I think Vladimir Voevodsky is a better example. Got kicked out of / quit high school. Got kicked out of / quit University. Had no degree, working in IT fixing printers or something, went to a math prof and ended up solving a problem he gave the kid to make him go away (or something along those lines). Ended up impressing some folks in the field and was admitted to Harvard’s math PhD program without ever applying. Later won the fields.
[ "Donald Knuth's 2022 'Christmas Tree' Lecture is about (twin binary) trees" ]
[ "math" ]
[ "zwon28" ]
[ 296 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
null
Knuth has become almost a legend in the world of computer programming Almost…
If he isn't a legend, who is?
Yes. That's the point the person you've responded to is making.
Of course. He cannot die until he finishes TAOCP.
it's knuth or dennis ritchie..., knuth falls more on the math side, ritchie on the ”invented/helped invent unix, c, and quite a lot of the roots of everything” ... of course, there's always alan turing, ada lovelace, charles babbage, admiral grace hopper...
[ "Reminder of 7^7/7!=2023 or $7^7 \\bmod 7! = 2023$ Are there other neat ways to compute 2023?" ]
[ "math" ]
[ "zwknyl" ]
[ 53 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
null
2000 + 000 + 20 + 3
2022 + 1 = 2023 What I think is neat about this one is it uses last year's value and the first natural number, 1
The neat trick is: wait till all the high school maths competitions are conducted. They'll always have a question with 2023 or 7 (sum-of-digits-of-the-number type questions) as their answer
Blasphemy. Let's set aside our similarities and fight to the death.
> 7^7/7!=2023 That can't be right. (Unless you mean 7^7/7 is NOT equal to 2023).
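The confusion is just how "!=" parses; both readings are easy to check (a quick sketch):

```python
import math

power = 7 ** 7             # 823543
fact = math.factorial(7)   # 5040
remainder = power % fact   # 7^7 mod 7! = 2023, as the title intends
quotient = power / fact    # 7^7 / 7! is about 163.4, so indeed != 2023
```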
[ "Can somebody give me a breakdown of the Sparse Fourier Transform, from a math perspective?" ]
[ "math" ]
[ "zwuig8" ]
[ 43 ]
[ "" ]
[ true ]
[ false ]
[ 0.9 ]
Hello everybody! So I recently found out that the Fast Fourier Transform algorithm was “improved” to the Sparse Fourier Transform algorithm, which is quickly becoming the new standard. I understand FT and FFT from a mathematical perspective, but I have little to no programming experience. I’m curious as to how it took this long to find an improvement to FFT, and if it’s objectively superior or not. From what I’ve glanced over, it seems like an approximation that’s used when speed is more important than complete accuracy. But again, not really sure. Thanks!
I am working on FFT-based algorithms, but not sparse ones so far. First, the FFT algorithm is extremely parallelizable, optimized (software and hardware), and available in any library, so even if you come up with a new, faster algorithm, it may not be as fast in practice. If I remember correctly, sparse-input FFT (different from the sparse FFT) is worth it when the input has more than 99% sparsity. The sparse FFT is made for signals that have a sparse spectrum (so frequency sparsity) but seems to be worth it at any level of sparsity. My main issue was that the implementation is not available everywhere (I need a GPU implementation), and I am not sure it can be parallelized as well (the vanilla FFT is just a fancy linear combination of the input). Lastly, my worry is that I use FFTs for the fast convolution property (multiplying in the Fourier domain is equivalent to convolution in the regular domain) and I wonder if the sparsity could “break” that, since we’d multiply many important frequencies with 0s. I am in biomedical imaging so accuracy is important.
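The convolution property mentioned above is easy to demonstrate with a toy O(n^2) DFT in plain Python (illustration only; real code would call an FFT library):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform, O(n^2)
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    # Inverse DFT with the 1/n normalization
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_conv(a, b):
    # Direct circular convolution for reference
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 2.0, 0.0]
direct = circular_conv(a, b)
# Pointwise product in the frequency domain == circular convolution
via_fourier = [z.real for z in idft([p * q for p, q in zip(dft(a), dft(b))])]
```

Zero-padding both signals turns this circular convolution into ordinary linear convolution.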
In my previous life doing signal processing I used a somewhat sparse Fourier transform (the Goertzel algorithm ) because we really were only interested in the phase of some signals at a very specific frequency and this is really easy to implement on embedded systems. But IIRC you very quickly get more expensive than FFT when you increase the number of components.
I dunno, but the sparse Fourier transform Wikipedia page is garbage. It looks like it's just doing selective transforms at certain frequencies, which is really slow unless the signal is really particularly sparse.
Yes! So we are aligning multimodal images together, that is, images of the same person but out of different machines. The structure is the same in both images, but not the "style" (called modality), imagine aligning an x-ray with MRI data for instance. An old (~1995) way of doing that is by computing the mutual information between the images and finding the best translation/rotation with this distance. The only issue is that it is slow but my colleague found a way of doing that in the Fourier domain ( https://www.sciencedirect.com/science/article/pii/S0167865522001817 ). So we use big GPUs (e.g. A100 80GB of VRAM) to compute the FFT since it's easily parallelizable (In 2D, you do it row-wise, then column-wise), and the complexity remains O(n log(n)) with n the number of pixels. We need a lot of VRAM because 3D medical volumes are big by default.
Sweet. It's amazing what the advancements in GPU development have been since I first started working with them in an ancillary role over nine years ago. I have access to an A100, but I don't get to play with it because it is used for other purposes and I'd get in trouble for using it. But I digress from the topic of sparse FFT vs. FFT. I have a friend (an EE) that has used blackbox solutions for FFT to do some work for electromagnetic compatibility. Said it's really cool stuff for what he does. I want to learn more about it. I am sure SIAM has publications on it. I think I bought a book from the AMS for it, but I've bought so many STEM books lately that I've lost track of what I have bought. Then I am interested in the libraries that provide implementations of FFT. I am aware of the open source library FFTW. I know Intel's MKL (now part of oneAPI) has an FFT implementation. I'm sure Arm, Ltd. also has an implementation in their performance libraries. I should talk to my friend at NVIDIA about their offering for FFT - he works in their math performance library group. So much good stuff going on in the applied mathematics world right now.
[ "Topological data analysis vs functional data analysis" ]
[ "math" ]
[ "zxcyld" ]
[ 38 ]
[ "" ]
[ true ]
[ false ]
[ 0.89 ]
Hello, I had a question regarding these two areas of applied math/stats. Topological data analysis and functional data analysis. Topological data analysis in my limited reading uses concepts from algebraic topology such as persistent homology to find “holes” in the data that can be used for various tasks in data analysis and ML. The underlying mathematics is algebraic topology and abstract algebra. But I had a question regarding functional data analysis. This is the use of basis functions to estimate curves. As a general case of the traditional multiple linear regression model, we are essentially estimating a function f as a function of some basis functions. Popular ones include wavelets, Fourier basis and B-splines. Just like TDA had a solid mathematical prerequisite in topology and algebra, does functional data analysis require functional analysis?
But I had a question regarding functional data analysis. This is the use of basis functions to estimate curves. Well, that would be a specific application, yes. Functional data analysis is the analysis of functional data -- that is, data lying in some function space. Typically, these data are smooth curves, in which case they might be represented by a set of basis functions. This makes computations especially easy, but there's no reason that functions must be represented in this way. Methods based on Gaussian processes are one example. does functional data analysis require functional analysis? If you want to understand techniques in FDA in any level of detail, yes, this would be helpful.
So what does “functional data” look like? And how is it different from time series? I have some data, and I did some traditional time series analysis, but after looking at it, it seems like functional data analysis methods could be useful, but I don’t know what constitutes “functional data”.
It's always good to learn background and theoretical knowledge on any subject if you have the time. You do want to consider exactly what your end goal might be. If you have a specific problem in mind, it might be better to find a primer on the topic specific to your area. For example, if you wanted to use functional data analysis to predict a financial market, finding a few papers written about the topic and reading their background information would be a better use of time than breaking open "An Introduction to Measure Theory" by Tao. One of the biggest stumbling blocks people run into with theory is that there is always more theory to learn. Don't be afraid to apply some methods while learning the background. If, on the other hand, you really like functional data analysis problems and want to investigate the methods, there is certainly background to learn.
Desktop link for functional data analysis page
Lmfao I’ve watched a major bank use it to pass regulatory approval
[ "Re-reading Rudin for more ground-work or doing other stuff?" ]
[ "math" ]
[ "zwugfq" ]
[ 20 ]
[ "" ]
[ true ]
[ false ]
[ 0.87 ]
I'm a sophomore and I did poorly in my Real Analysis course (I could only manage slightly more than the class average). Towards the end of the course, I fell terribly ill, so I really didn't end up solving a lot of stuff from Rudin. I am just afraid that this is going to bite me in the ass later down the line. Should I spend more time with Rudin, doing more questions, or should I just start working on the next stuff, revisiting when I need to?
Depends on what exactly you missed and what your long term plans are. Chapters 1-7 constitute the core material that anyone considering applying to graduate school should know. If you are considering graduate school then at some point or another you should make an effort to familiarize yourself with these chapters, regardless of whether analysis is your thing or not. That being said, most undergrad sequences would probably take two semesters to get through these chapters. Without knowing what chapters your class covered it's hard to say how important it is to go back and catch up now.
Do both, in my opinion you will always have to revisit a lot of older mathematics until you finally get it.
Rudin was the textbook for my analysis course as well. I thought it was fine when accompanied by instruction but I honestly feel it is terrible for self-study. I'm currently reading Pugh's book to bone up on analysis again and it's terrific. I definitely recommend it for self-study.
Rudin is fine, but I think you can look for a more modern book. I suggest these 2: Laczkovich, Miklós; Sós, Vera T. (2015). Real Analysis; Foundations and Functions of One Variable Laczkovich, Miklós; Sós, Vera T. (2017). Real Analysis; Series, Functions of Several Variables, and Applications
I think the chapters that I have problems with are Integration and Sequences and Series of Functions, that's all. I wasn't able to complete the problem sets in time. What do you think is optimal?
[ "Linear Algebra Lang vs LADR (Axler)?" ]
[ "math" ]
[ "zwpea4" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 0.82 ]
I know that LADR emphasizes operator theory a lot more, and banishes determinants till the end, whereas Lang is more dense in content and quite terse. LADR claims to also have no prerequisite linear algebra knowledge and is quite easy to read. Which one of these two books is better from a self-study perspective, considering I am already really familiar with applied linear algebra (typical first course stressing matrix operations and computations) as well as VS and operator theory. I do not know if Lang's book is too difficult in a self-study perspective.
Axler wrote a short article called Down With Determinants! where he gives determinant-free proofs of some linear algebraic results. Basically, he feels that the determinant acts as a random tool that removes the intuition behind certain results and reduces them to facts that just kind of fall out of the computation. His book discusses determinants (they are undeniably important and useful), but actively delays discussing them until much later than other texts.
Lang's book is really exceptional if you really want to understand at an abstract level and have a reasonable handle on proofs and abstraction. But it's a Lang book. It's dense.
Read rather than done right.
I can't speak on Lang, but Axler's book is phenomenal. His exposition is clear, and the proofs he provides are positively gorgeous. It is an excellent text to read as a follow-up to a computational linear algebra course.
Because they don't work in infinite dimensions, and the crutch of determinants makes the jump to functional analysis much less smooth.
[ "Can an arbitrary amount of information be conveyed via a single bit flip, if the system of bits is arbitrarily large?" ]
[ "math" ]
[ "zwr4rc" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.73 ]
Hopefully this is the right place to post this. It feels like it's right on the boundary of computer science, mathematics, and information theory. Specifically, I'm thinking about the "Almost Impossible Chessboard Puzzle" and solution from here: (And yeah, the article title is ironic, because it's definitely not a "no math" solution just because it doesn't make use of mathematical notation; it's merely a mathematical solution that's simple enough it can be explained reasonably well using pure English.) I can understand the logic of the solution outlined in the article, but one thing that stands out is that it seems like the amount of information (in bits) that can be encoded via a single bit flip is directly related to the size of the system containing the bits (or coins, or whatever). But that would seem to imply that I can just keep adding on more and more random states to the system, such as making it a 16x16 instead of an 8x8 chessboard, and then more information can be conveyed via a single bit flip, because now we'd need to answer 8 questions, or, equivalently, we'd need to use an 8-bit binary number to indicate the exact location of any particular square on the board. That doesn't seem right, because just adding more random states to a system shouldn't let me convey more information using the same strategy as before, should it? But then maybe it's not actually more information, since it's still just the location of a single square, even if we need more bits to represent it? What comes to mind is entropy (in the information theory sense of the term) and how larger systems tend to have larger entropy values, simply because there's a larger number of possible microstates they could be in. Is that actually what I'm getting at in regards to a larger chessboard puzzle resulting in location encodings with more bits than a smaller chessboard?
It certainly feels related, and, IIRC, Shannon entropy, given in bits, can be thought of as the average number of yes/no questions that we'd need to ask about a system in order to determine its exact configuration, or, equivalently, the minimum number of bits needed to specify the exact configuration. Hmmm...I suppose the entropy would then be, not the number of bits needed to specify the location of a single square on the board, but the minimum number of bits needed to specify the locations of the squares. I'm not really sure what to do with that, though. This whole thing is fascinating, but I feel like I'm missing something fundamental about it.
Here is a different (but equivalent) formulation of the question, and a different (but mostly equivalent) solution: The warden places before you n arbitrarily flipped coins in a row, and tells you a number from 0 to n-1. For simplicity, you can assume that n is a power of 2. You are allowed to flip any one coin in the row. (The solution works regardless of whether or not you are allowed to walk away without flipping any coin at all; see below.) Your friend then has to look at the row of coins and repeat the number the warden told you. Solution: bitwise XOR Perhaps what's confounding about this is that "random noise" is somehow used to communicate information, and "more noise" can communicate "more information". That's not what's going on here. The randomly flipped coins don't actually matter for the communication, the important part is that you have n possible choices you can make. In fact what the solution does is "filter out" the randomness, so that only the impact of your choice is visible to your friend. Consider yet another version of the puzzle to demonstrate this: The warden presents you with n empty cups, and tells you a number from 0 to n-1. You are allowed to place one coin into any of the cups. Your friend then looks at the cups and has to guess the warden's number. The solution to this variant should be obvious. And now it should not be surprising that allowing more "data space" that you can pass between yourself and your friend allows you to communicate "more information", even if you are only allowed to modify a fixed portion of the data space.
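A minimal executable sketch of the coin-row XOR solution described above (the variable and function names are my own):

```python
import random
from functools import reduce
from operator import xor

def board_value(coins):
    # XOR together the indices of all coins currently showing heads.
    return reduce(xor, (i for i, c in enumerate(coins) if c), 0)

def coin_to_flip(coins, target):
    # Flipping coin k toggles index k in the XOR, so choose
    # k = current_value XOR target to make the new value equal target.
    return board_value(coins) ^ target

n = 64  # must be a power of 2 so every XOR result is a valid index
coins = [random.choice([True, False]) for _ in range(n)]
target = random.randrange(n)

k = coin_to_flip(coins, target)
coins[k] = not coins[k]
assert board_value(coins) == target  # your friend recovers the number
```

Note that if the board already encodes the target, k comes out as 0, and toggling coin 0 leaves the XOR value unchanged (XOR with 0 is the identity), which is why the strategy works even when you are forced to flip exactly one coin.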
If there are n different bits you can flip, and you have to flip one, then you can distinguish between n different states. In the problem you linked to, for an n×n chess board there are n^2 squares and n^2 choices of coins to flip. So there is enough information to indicate the special square. The choice that you are making is which of the n^2 coins to flip, so if n = 2^m then your choice conveys 2m bits of information.
Perhaps what's confounding about this is that "random noise" is somehow used to communicate information, and "more noise" can communicate "more information". Yes, this is what was throwing me off. That's not what's going on here. The randomly flipped coins don't actually matter for the communication, the important part is that you have n possible choices you can make. In fact what the solution does is "filter out" the randomness, so that only the impact of your choice is visible to your friend. That's the insight I wasn't seeing. Thank you!
When you choose which coin to flip, you are choosing between n^2 different states. The number of bits of information this conveys is the number of yes/no questions required to distinguish between the states, which is log_2(n^2) = 2 log_2(n). If n = 2^m then this is 2m bits of information. The information is conveyed by which coin you are flipping. With a larger board, you have more options of which coin to flip, so your choice conveys more information.
flip the bit if the british are coming and don't flip if they aren't.
[ "What is your favorite memory relating to math?" ]
[ "math" ]
[ "zx1l9w" ]
[ 18 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
I want to hear your beautiful stories about math…indulge me!
There was a running gag in my calculus class during senior year of high school about the derivative of sec(x) being sec(x)tan(x). Basically we would always say that the derivative of "sex" is "sex tanks". Our teacher even ordered custom shirts for the class at the end of the year, which had two tanks on top of each other with the top one "firing". Underneath the tanks was the identity "d/dx ( sec(x) ) = sec(x)tan(x)". I still have my shirt, and am planning to wear it to my ten year high school reunion.
I took an 'Advanced Calculus' course in college. Kind of a middle ground between standard Calculus, and Real Analysis. It was my first real proof-based course. For the first month, it absolutely kicked my ass. I was on the metaphorical ropes, taking punches to the face every single day. In that first month, I successfully completed MAYBE 25% of the homework problems. There were assignments that I didn't even bother to turn in, because I was so embarrassed with only being able to partially solve 2 out of 6 problems, or things like that. Our first exam rolled around. It was surely going to be a disaster, for me. We were told that the exam would have 10 problems, 4 would be duplicates of our homework problems, and 6 would be new problems. (Or something like that... this was many years ago... I don't remember the exact breakdown of problems). The exam was around 3 PM. My plan was to skip all my other classes, and spend the whole day in the library, doing my best to memorize the solutions to all of our homework problems. I thought that, hopefully, maybe, I can cram all these meaningless words and phrases into my brain, and spew them out, and at least get a few problems right, by rote memorization. And then maybe one or two of the new problems would be easy enough for me to solve on the spot. And that if everything went well, and I memorize as much homework as humanly possible, MAYBE I can come out of the exam with a C-. I got to the library to start memorizing around 7:30 or 8 AM. I began writing shit out, in order to help it stick in my brain. Just simply copying the homework solutions that our prof handed out, into my notebook. One sentence at a time. After a little while, I got to a point of "Oh, hey! I don't NEED to memorize this particular sentence, because I understand how it follows from the previous one. If I see this problem on the exam, I don't need to regurgitate this exactly word for word, because this piece of it makes sense to me." 
Then a little later, another one of those similar moments, where I realized I don't NEED to simply memorize this thing. Then over the course of the next few hours, little by little, one sentence at a time, I started to realize that it all actually makes sense, and it's NOT all these meaningless words and phrases. I realized I can ACTUALLY follow the logic of it. I started that morning in a super stressed-out panic. I was depressed that I was going to fail this course, and almost feeling nauseous. My only hope was that I could cram enough shit into my brain that I might be able to get a marginally passing grade on the exam. But, in the course of about 5-6ish hours, it all fell into place, and 'clicked' for me. Eventually, I realized "I don't need to memorize ANY of this... it just... it makes sense." With about an hour to go before the exam, instead of borderline shitting myself in complete terror, I took a nice long walk around campus with my headphones in, listening to Pearl Jam - Ten, and relaxing, completely confident that I was going to rock this exam. I got an A on the test, and thoroughly enjoyed doing my homework for the remainder of the semester, and mostly all of my math courses that followed (PDE's can still go fuck themselves though). So, that's my favorite memory, related to math. That one little stretch of panic-studying where the world changed from "You're stupid and don't understand anything and you're a failure" to "You've got this, and you should definitely major in math" just in the timespan of one morning + afternoon.
I got the right answer. And then I was hooked.
My first calculus course, with a crazy-haired Russian who had a strong accent. He once wrote next to an incorrect answer I gave for one of his tests: " ". I did well on the rest of the test so it was fine, but it absolutely made my day, lol.
as a child, i was told to write a 3 by a teacher whilst learning how to write. i wrote a 3 but backwards. they told me to fix it. i wrote a proper 3 next to it, forming an 8. i laughed at it, but my teacher was quite disappointed
[ "Why is unknown —> unknown true?" ]
[ "math" ]
[ "zwlz4u" ]
[ 18 ]
[ "" ]
[ true ]
[ false ]
[ 0.82 ]
In Łukasiewicz Ł3 logic, unknown implies unknown is true. I have no idea how this makes any sense, especially given the other truth tables. I’m wondering why it’s like this, and what’s the logical basis for it? Thank you.
If it wasn’t true, then the other options are 1. “unknown implies unknown is unknown” which has the counter-intuitive consequence of “A -> A” not being a tautology and 2. “unknown implies unknown is false” which means “A -> A” is not only not a tautology but false for some truth assignments, which seems even more counterintuitive. So it is a decision made by the person who defined it based on something in the spirit of these (and I’m guessing many other) observations.
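For concreteness, here is a small sketch of my own of the standard numerical presentation of Ł3, with truth values 0, ½, 1 and Łukasiewicz's implication v(A → B) = min(1, 1 − v(A) + v(B)); under this definition U → U comes out true, exactly as the truth table says:

```python
T, U, F = 1.0, 0.5, 0.0  # true, unknown, false

def implies(a, b):
    # Lukasiewicz implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

assert implies(U, U) == T  # unknown -> unknown is true
assert implies(T, F) == F
assert implies(U, F) == U
assert implies(T, U) == U
assert implies(F, U) == T
```

In particular implies(a, a) = min(1, 1) = 1 for every truth value a, so A → A is a tautology, which is exactly the intuition the parent comment is defending.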
You are correct that the "->" connective, also known as the conditional or implication operator, is typically used in the context of modus ponens, which is a rule of inference that allows one to conclude the consequent of an implication from the antecedent and the implication itself. In other words, if we have a proposition of the form "A -> B" and we know that A is true, then we can conclude that B is also true. In Łukasiewicz Ł3 logic, the truth table for the "->" connective is designed to reflect this use of the operator in modus ponens. The truth values of the various combinations of P and Q in the truth table are chosen such that whenever A -> B is true and A is true, it is also the case that B is true. To answer the specific questions you posed: T -> F needs to be false because if A -> B is true and A is true, then B must also be true. If A -> B is true and A is true, then B must be false, which is a contradiction. Therefore, T -> F must be false. U -> F needs to be either false or unknown because if A -> B is true and A is unknown, then we don't have enough information to determine whether B is true or false. Therefore, U -> F must be either false or unknown. F -> U needs to be either false or unknown because if A -> B is false and A is false, then B can be either true or false. Therefore, F -> U must be either false or unknown. This comment was written by ChatGPT and most of the content is plagiarizing the post by /u/antonfire that you gave it as input. To the extent that the post contains "original" content, it is partly wrong (namely, the assertion that "F -> U needs to be either false or unknown"; it is actually true) and wholly unhelpful. Please just stop.
You could have unknown → unknown be unknown if you prefer that. The logic you'd obtain is called Kleene logic (assuming you leave everything else as in Ł3). One thing that is not so nice about Kleene logic is that it has no tautologies at all.
I'm not familiar with Lukasiewicz logic, but let me give a rough principle that hopefully helps in this context, and leave working out whether it's actually useful as an exercise to the reader. Usually a fruitful way of dealing with the "->" connective is in terms of what it's for, which is modus ponens. In a lot of contexts, A -> B is the weakest thing that combines with A to let you conclude B. Hopefully this makes sense: the bar for "have I shown A -> B?" is "have I shown something that, combined with A, would let me conclude B?" and no higher. Here I guess "weakest" means that, in the truth table, as long as you can adjust an F to a U or T or a U to a T and retain the property that A -> B and A together let you conclude B, then you "should". Otherwise whatever your truth table is for, it's something that's "too strong"; stronger than "A -> B" is supposed to be to do what it does. From this perspective, the real questions are: * Why does T -> F need to be F (why not U or T)? * Why does U -> F need to be U or F (why not T)? * Why does T -> U need to be U or F (why not T)? And in particular those "why does" would be rooted in the idea that A and A -> B let you conclude B. (And maybe some other ideas about what L3 logic is "for" or how one expects indeterminacy to behave.) There might be good relatively concrete answers to these questions.
[ "ChatGPT vs Mathematics Classes" ]
[ "math" ]
[ "zwnhtp" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.6 ]
OpenAI has been producing a suite of tools for engaging in creative endeavors using AI. This includes DALL-E, which can use a natural language prompt to produce a digital painting, and ChatGPT, which handles more creative writing. I had seen a couple of YouTubers talk about using ChatGPT to write scripts for them, and it has the ability to handle some technical topics. I started to wonder: how would it handle proving a theorem? Just to test out its ability, I asked my wife, who is a professor of psychology, to provide me with a writing prompt that she gave her students. We fed it exactly into ChatGPT. The result? Within 2 minutes, a complete five-paragraph essay was written that hit on all of the points she was looking for in an essay. And it was better than what she saw from most of her students. I fed in the most straightforward theorem I could think of: the infinitude of primes. I figured, if ChatGPT is going to get anything right, then it should be this one. There are a lot of resources online that have full proofs of the theorem, and it's a theorem that every mathematician is exposed to as they are first learning proofs. It wasn't a correct proof, but you could see it lurking in the language of the false proof that it wrote. ChatGPT collected the finite collection of primes, but it didn't take their product. It simply took the largest and looked at the next integer. So, ChatGPT did well in producing language that would sound correct to a layman, but it was innocent of the underlying logic that would hold the proof together. Also, some of my viewers got ChatGPT to give them a closer-to-correct proof and they shared it in the comments. What really bothers me is that what ChatGPT gave me resembled a student that kinda followed along with the class, but couldn't quite put a correct proof together. I wonder, if I was grading several of these, how many points I would give? For the steps that the chat bot gave me, that would be a lot of points for someone that put in zero effort.
It would be even worse for a natural language essay. Now, OpenAI also has an automated theorem prover called GPT-f. But that isn’t written in natural language, and would take a bit more professional know how to actually use. Perhaps, the combination of the two is closer than we think? I honestly have no idea of how close or far away that is. From a professional standpoint, I could see the utility of having an AI partner in proving theorems. Here is a link to if you want to play with it. Edit: Corrected an error caught by
Your "but could quite put a correct proof together" should be "but couldn't quite put a correct proof together". My impression from looking at the proofs by ChatGPT so far is that it resembles a student who has no serious understanding of the course material and is just throwing the bull to see what sticks.
The thing with ChatGPT is that it's great with natural language, parroting what it has seen, and combining elements in a natural-sounding way, but it doesn't employ logic. Of course, this is sufficient for the vast majority of human activities (which makes it so spooky) but doesn't really get you through elementary proofs that vary the tiniest bit beyond what has been explicitly witnessed.
Yesterday Chatgpt was just giving me straight out fabricated theorems and fake references lmao.
Exactly this. It’s great at combining language forms and interpolating but not necessarily extrapolating. It’s still pretty amazing. But it’s better as an augmentative tool.
Designing new never before seen problems for number Theory, abstract algebra, real analysis, and so forth is very nontrivial. Especially if you also want those problems to be approachable to students. It’s one thing to make your own calculus problems, but another thing altogether for proof based courses.
[ "Quick Questions: December 28, 2022" ]
[ "math" ]
[ "zxeonx" ]
[ 10 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual-based questions posted in this thread, rather than "what is the answer to this problem?". For example, here are some kinds of questions that we'd like to see in this thread: Including a brief description of your mathematical background and the context for your question can help others give you an appropriate answer. For example consider which subject your question is related to, or the things you already know or have tried.
Are there any books/papers that make an inventory of the common types of proofs used in mathematics/history of maths? Types of proofs are for example: brute force, proof by contradiction, symmetry argument, Cantor diagonal, unwritten proof because the margin is not wide enough. I have a master's in mathematics, but I am now working in industry, and I was curious if there was a book similar to How To Solve It by Pólya, but a bit more concrete, so that I could spot patterns/compose with the types of proofs. [Sorry if the question is silly.]
Whitney embedding says <= 14. If you could embed in R^8 then by removing a small ball from inside the embedded sphere you'd have an h-cobordism from the exotic S^7 to the standard S^7, so the h-cobordism theorem implies they are diffeomorphic, so >= 9 (on the other hand see here for an argument that exotic S^7s immerse in R^8). On the other hand see the introduction and Ch. 9 of Milnor's book. Exotic spheres can be found as intersections of complex hypersurfaces (specifically Brieskorn manifolds) with a small sphere around a singular point, and this produces embeddings of exotic (2n-1)-spheres inside the (2n+1)-dimensional standard sphere. Combining with stereographic projection this gives embeddings of exotic 7-spheres in R^9. Apparently any exotic sphere with a codimension 2 embedding in a standard sphere can be obtained through this process. I can't find any reference which states explicitly that exotic 7-spheres admit a codim 2 embedding.
They are in bijection, specifically through pulling back along the homotopy equivalence. This follows from the fact homotopic maps pull back to isomorphic vector bundles.
This one is a standard text on learning patterns of proofs techniques: https://www.amazon.ca/Problem-Solving-Through-Problems-Loren-Larson/dp/0387961712
Is there any significance to the fact that the area under a Sin curve, between 0 and Pi, is 2? It seems oddly coincidental.
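For what it's worth, the value 2 falls straight out of the computation:

```latex
\int_0^{\pi} \sin x \, dx
  = \bigl[-\cos x\bigr]_0^{\pi}
  = (-\cos \pi) - (-\cos 0)
  = 1 + 1
  = 2.
```

So the 2 is just cos(0) − cos(π); whether that integer value counts as "significant" is a matter of taste.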
[ "What notation always gets to you no matter how much you use it?" ]
[ "math" ]
[ "zx1y6m" ]
[ 289 ]
[ "" ]
[ true ]
[ false ]
[ 0.96 ]
Anything that is never intuitive or quick no matter how much you use it.
Literally everything in differential geometry
Quantum mechanics lecturer would use small Phi for time independent state and big phi for time dependent state. He would then forget about his convention….
As the saying goes, differential geometry is the study of things that are invariant under change of notation.
Honestly, I feel like notation in math is like tech debt on a level beyond anything other fields have to deal with. The trouble is that you get one super smart person trying to convey an abstract idea, who only cares about conveying it to, most likely, other super smart people who are capable of adjusting to whatever on-the-fly/arbitrary convention the first super smart person invented on the spot or bent/repurposed, and then everyone runs with that forever. It's not like in software engineering where there's code review and a senior dev can say "nice algorithm, Gary, but for the love of all that is holy let's worry a smidge more about readability here". And then people build on that system and it just gets compounded. Not to mention that what is convenient in one area of math conflicts with what is convenient in another, and then when you need to do the same thing in different types of math, helpful conventions sometimes just go out the door; if you're new to something you get the sense that people are like "what problem?", or you're hitting your head against the door until you finally work out that in the new area some things are just assumed, etc. And then sometimes you manage to (temporarily) transcend into the smart space and you're thinking "what problem?" yourself, to any new person coming in. I don't know what the best solution here is that doesn't involve some deeply impractical aspects, but I really don't like things as they stand rn...
I never remember to put arrows over my vector variables🙃
[ "Anyone like the get high and think about mathematics?" ]
[ "math" ]
[ "wswi5c" ]
[ 900 ]
[ "" ]
[ true ]
[ true ]
[ 0.86 ]
Like the history, particular theorems or fields, stuff like that?
My brain when high isn't productive at doing math, but my creativity is increased and I occasionally get inspiration that I might later flesh out when sober. Doesn't happen too often, and the revelations aren't really high quality. They're certainly fun, though.
Never drink and drive. Alright, drink and derive it is.
Not high, but after a few drinks.
Richard Feynman did
I will go as far as to say: I did not think about or enjoy math until I began Medical Marijuana. My family has caught me reading my Calc text books for fun and understood right away that I was stoned.
[ "How to get into mathematical research?" ]
[ "math" ]
[ "wtbrkm" ]
[ 23 ]
[ "" ]
[ true ]
[ false ]
[ 0.88 ]
null
If I could send a message back in time to my grad school self, it would be to look at which professors have a track record of successful students - who’s churning out phd students left and right? Then, I’d go talk to them and see if they are working on something I could find interesting.
My question is how do you find this info
https://www.mathgenealogy.org or go to their website for CV
First you have to find an advisor or start going to local seminars and see if there are problems people are dealing with that you could help with.
+1
[ "Searching for help for dealing with Scientific Notation of really big numbers" ]
[ "math" ]
[ "wsxw1y" ]
[ 2 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.63 ]
null
Just say 2.36118 times 10 to the -64
And it's a really small number
I guess what's being implied is that there isn't really a natural way that's useful at all. For example, anything past a trillion isn't useful for a day-to-day person. But if you show them the scientific notation they would understand better. Someone familiar with scientific notation would also prefer you to just tell them the scientific notation, as numbers that are super big or super small past a point are just not something they're going to easily catch onto. Whereas 6.34*10 is very useful to compare to other numbers and understand without memorizing names past trillion. Edit: That being said, this post has taught me something new. I've only ever seen trillion and quadrillion written out. But I just googled what's past them and it's interesting because the prefix is increasing in order. Million, prefix mono, illion ending; billion, prefix bi, illion ending. Trillion, tri prefix. Quintillion, quint prefix. And so forth. Edit 2: ok so now I'm going through Wikipedia. It's about very large numbers. But a lot of the ideas about usefulness also apply to very small numbers https://en.wikipedia.org/wiki/Names_of_large_numbers?wprov=sfti1
I guess what’s being implied is that there isn’t really a natural way that’s useful at all. Yes, I am fully aware since the start that this is not something of use, yet I still want to know of it just because I enjoy it
Assume you can convert the number x into scientific notation: x = 2.36118 * 10^(-64) You will need the exponent to be a multiple of 3. You can change the exponent by either adding or subtracting. You should always subtract from the exponent so that you will not have to pronounce a leading zero, which is not a significant figure. In this case you would subtract two from the exponent. x = 236.118 * 10^(-66) Now you can pronounce the number in three basic steps. Start with the part, which is an integer between 1 and 999 inclusive, before the decimal point: "two-hundred thirty-six" I would do the second part differently from your proposition: just say the digits in order: "point one one eight" This works well since there can be arbitrarily many digits here, and I have heard plenty of English speakers pronounce numbers like this. The third part is to convert the exponent, which must be divisible by three, into a 'number-word'... the biggest table on this webpage, which I expect is where you came up with the word 'unvigintillion,' shows you how to do this: first, if the exponent is m, then let n = m/3 - 1. Then you must use the power of etymology to convert n as in the first column of the table into a number-word as in the fourth column of the table. In this case, our number (66/3 - 1 = 21) is small enough that you can literally read it off this table to get 'unvigintillion'. Lastly, of course add "ths" since the exponent is negative. The end result is: "two-hundred thirty-six point one one eight unvigintillionths" I was studying this table to see if I could explicitly write down the number to number-word function, and I observed the following inconsistency: if f is said function, then since apparently f(101) = "uncentillion" it should follow that f(102) = "ducentillion" or something similar to that. However, the table says that in actuality f(200) = "ducentillion".
Hence while there are some clear patterns in the short-scale number words, they do not actually yield a function (yielding number-words) defined on all integers. I did not check the other scales. My expectation is that it is unlikely that there is a working standardization of such number-words in English that is widely used. Such a word-generation feature is just too meta for English. You may find that some logical conlang, such as Lojban, has better features for pronouncing numbers like this.
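The three-step scheme described above can be sketched in code. This is a minimal illustration under stated assumptions: `SCALE_WORDS` is a tiny hypothetical excerpt of the full short-scale word list, and the mantissa is printed as digits rather than spelled out in English words.

```python
import math

# Hypothetical excerpt of the short-scale table; a real implementation
# would need the full (irregular) word list discussed above.
SCALE_WORDS = {1: "million", 2: "billion", 3: "trillion", 21: "unvigintillion"}

def scale_name(x):
    """Name x following the three steps above: shift the exponent down to a
    multiple of 3, read off the mantissa (1 <= integer part <= 999), then
    look up the scale word and append 'ths' for negative exponents."""
    exp = math.floor(math.log10(abs(x)))   # exponent in scientific notation
    exp3 = 3 * math.floor(exp / 3)         # subtract down to a multiple of 3
    mantissa = x / 10.0 ** exp3            # integer part now between 1 and 999
    n = abs(exp3) // 3 - 1                 # row index into the short-scale table
    word = SCALE_WORDS.get(n, f"10^{exp3}")
    suffix = "ths" if exp3 < 0 else ""
    return f"{mantissa:.3f} {word}{suffix}"
```

For the example in the comment, `scale_name(2.36118e-64)` yields `"236.118 unvigintillionths"`.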
[ "Is first-order logic the most fundamental part of maths? Does it epistemically precede all other fields?" ]
[ "math" ]
[ "wtp4ps" ]
[ 7 ]
[ "" ]
[ true ]
[ false ]
[ 0.77 ]
[deleted]
Certainly not. FOL is only one of many possible logics for encoding mathematical thought. There is quite a lot of work done on alternative logics and how well they seem to encode our basic ideas about what the mathematical universe is like.
There are many different alternative logics; consider modal logics, linear logic, higher order logic. Intuitionistic/constructivist logic is another group. Realizability, probably. Something interesting is also considering what foundations we are building logic on top of. We could consider the traditional axiomatic logic from the Greeks or whatever. You could consider set theoretic foundations with situations and all that. You could also consider type theoretic foundations*, which have picked up steam recently. Additionally you can build logic and set theory from topos theory. I think there are probably other category theoretic approaches to logic as well, but I'm not a category theorist so I can't speak to them. *idk if it counts as defining logic. It's more another interpretation of logic, or an equivalent formulation, rather than another foundation.
Type theory, affine (or linear) logic. I can't explain them better than just stating some search terms, sorry.
Type theory isn't really a logic, or a specific thing. Type theory is more of a method, or a collection of similar methods, for proofs. Kinda like natural deduction. Moreover, there are millions of different type theories, so saying "type theory" is no better than saying "logic".
Which alternative logics are these? (I had accidentally replied twice, it was probably a glitch related to internet connection)
[ "Returning to maths after a year" ]
[ "math" ]
[ "wsv7gg" ]
[ 5 ]
[ "" ]
[ true ]
[ false ]
[ 0.78 ]
null
i’m coming back to math after graduating with a BA in 2012 and being in software til last year. i hope it’s not impossible for you to get into a program because then i’d probably be hopeless
difficult but definitely not impossible. But your analysis needs to be absolutely bulletproof going into grad school because it lays the foundation for all of higher mathematics. Applied mathematics is very broad too but I would expect that you'll end up taking functional analysis and numerical analysis so just be aware of that. Take some time, brush up on analysis and talk to some faculty in the program and you will do fine.
What should I take after multivariable/vector calculus? A course in topology? I want to study relativity one day
linear algebra and analysis will be the next courses to take; definitely hold out on topology until after your first course in analysis/proofs, since topology can get pretty abstract and it's nice to have that base. And linear algebra is so freakin cool, take as many courses in linear algebra as you can, every field of math uses it.
yes, you will use analysis in EVERY field of higher mathematics and the techniques applied therein. I did lots of "applied" mathematics in undergrad and it was a lot of functional analysis techniques. Applied mathematics still uses a ton of theory.
[ "Is there a standard term for \"function whose domain is the first n natural numbers\"?" ]
[ "math" ]
[ "wsw9by" ]
[ 12 ]
[ "Removed - ask in Quick Questions thread" ]
[ true ]
[ false ]
[ 0.93 ]
null
Finite sequence
An n-tuple?
The set of n-tuples (a_1, ..., a_n) with a_k in some set C is in bijective correspondence with the set of functions f: {1, ..., n} -> C by the mapping F(f) = (f(1), ..., f(n)), with inverse F^(-1)(a_1, ..., a_n): k -> a_k. We don't really talk about n-tuples as functions, since it is more convenient to think of them as n-tuples, but they are the same thing. If you are familiar with l^p and L^p spaces, you might recognize that l^p on n points is isomorphic to L^p of the counting measure on {1, ..., n}; depending on the construction, they could even be the same object. The notation C^n for the set of n-tuples of a set mimics (I believe intentionally, although I could be wrong about that) the notation C^([n]) for the set of functions from [n] = {1, ..., n} to C.
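The correspondence described above is easy to illustrate concretely. A trivial sketch (the helper names are mine), showing that an n-tuple and a function on {1, ..., n} carry the same data:

```python
def tuple_to_function(t):
    """F inverse: send (a_1, ..., a_n) to the function k -> a_k on {1, ..., n}."""
    return lambda k: t[k - 1]

def function_to_tuple(f, n):
    """F: send f: {1, ..., n} -> C to the tuple (f(1), ..., f(n))."""
    return tuple(f(k) for k in range(1, n + 1))
```

Round-tripping either way recovers the original object, which is exactly the bijectivity claim.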
There is a name for a functor whose domain is the category [n] consisting of the integers 0,1,...,n and one morphism a -> b if a<=b, and that name is an n-simplex. But yeah, the better answer is that you can consider it a finite sequence of n elements in the codomain of the function.
That's irrelevant, the set of n-tuples and the set of functions from {1,2,...,n} are canonically isomorphic.
[ "Math as a game?" ]
[ "math" ]
[ "wsw6w8" ]
[ 6 ]
[ "" ]
[ true ]
[ false ]
[ 0.71 ]
null
Considering there are sandbox games with no real objectives where you just make stuff, sure, why not?
I actually teach math to kids, teenagers and people entering university telling them that math is like a game, and that you can't play a game without knowing the rules. So they have to learn to add, multiply and divide properly, along with how to operate with fractions and exponents, or they won't be able to do any of the more complicated stuff. And it ends being a pretty convincing argument for them.
I actually like this definition more than many others, and better describes it than it being a "science"
The more math I learn, the more certain I am that that's exactly what math is. It's a game of posing and solving challenging problems and inventing new tools and methods for solving those problems, all within the axiomatic framework, aka the rules. Everything else you can say about math is incidental.
That’s how I think of my research tbh — just fun times discovering what’s true or not :D
[ "Can someone tell me what is going on with this Number Cube I've put together? It feels of significance but it's escaping me." ]
[ "math" ]
[ "wt6wkc" ]
[ 3 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
null
The first image is the number reduction of the second image. The third image highlights a few of the sequences I've found.
I know that the numbers in the first image are supposed to be the results of adding the digits in each of the numbers in the second image (edit: actually the "digital root", where you add up the digits, then add up the digits of the sum, and so on until you reach a single-digit number), but what is the second image, exactly? I think that the rows are just Fibonacci-type sequences, where the sequence of the nth row starts with n and 2n and then proceeds based on the usual "add the previous two numbers to get the next" rule, but it would be really helpful if you could clarify what it's supposed to be. Also, it isn't really clear what the third image is supposed to be pointing out. Edit: looking back at this, I think I figured out what you were saying in the third image. The reason that the numbers in the 3rd, 6th, 9th, 12th, and 15th rows of the table in image 2 all have digital roots of 3, 6, or 9 is that all the numbers in those rows are multiples of 3 (more generally, all the numbers in row m are multiples of m), and if a number is a multiple of 3, the sum of its digits will also be a multiple of 3 (a classic divisibility test for base-10 numbers; I can write out the proof if you want.) So if you take a number in one of those rows, and add together its digits, you'll get a multiple of 3; if you add together that number's digits, you'll get another multiple of 3; and so if you keep doing that until you end up with a 1-digit number, then you'll end up with one of the 1-digit multiples of 3, i.e. 3, 6, or 9. Similarly, if a number is divisible by 9, so is the sum of its digits, so the digital root of a multiple of 9 will always just be 9. If you extended the table to include an 18th row made according to the same rule, the digital roots of those numbers would also all be 9. 
This also explains why the numbers in the 3rd and 7th columns of the 2nd image all have digital roots of 3, 6, or 9: in general, the numbers in the nth column of the table in image 2 are all multiples of the nth Fibonacci number, and since the 3rd and 7th Fibonacci numbers are divisible by 3, so are all of their multiples. Same goes for the numbers in column 11: that happens because the 11th Fibonacci number is divisible by 9. As for the patterns in columns 2, 5, 9, and 10, I don't have an explanation off hand, but I'll edit my comment again if I think of one. Edit 2: figured out what's going on in columns 5, 10, and 9. It has to do with this formula for the digital root, which relates a base-10 number's digital root to its remainder when divided by 9. (The formula actually gives a more general rule that works for any base, but I'll stick with the base-10 version here since we're just working with base-10 numbers.) Essentially, if a number is a multiple of 9 (has a remainder of 0 when divided by 9), then its digital root will be 9 (as seen in the previous paragraph); otherwise, its base-10 digital root will be its remainder when divided by 9. Now, here are some important facts about remainders: first, say that the number a has a remainder of c when divided by n, and the number b has a remainder of d when divided by n; then the remainder of a + b when divided by n will be the same as the remainder of c + d when divided by n. (In other words, you can find the remainder of a sum by finding the remainder of the sum of the remainders.) Second, the remainder of a * b when divided by n is the same as the remainder of a * (the remainder of b when divided by n). So we can explain what's going on in column 9 like this: the 9th Fibonacci number has a remainder of 1 when divided by 9, so it has a digital root of 1. 
The number in column 9, row 2, is just the 9th Fibonacci number plus itself, so (using the second fact about remainders from earlier), it will have a remainder of 1 + 1 = 2 when divided by 9, and so will have a digital root of 2. Similarly, the number in column 9, row 3, has a remainder of 2 + 1 = 3, and so will have a digital root of 3. More generally, the number in the nth row of column 9 will have a digital root of 1 plus the digital root of the number in the (n-1)th row of column 9, except when adding 1 to the digital root of the number in the previous row would get you 10, in which case you would "wrap around" back to 1. So this is why you see a repeating pattern of 1, 2, 3, ... 9. Similarly, for columns 5 and 10, they start with a number whose remainder when divided by 9 is equal to 8, and the number in each row after that is equal to the remainder of (8 + the number in the previous row) when divided by 9. When you have a number whose remainder when divided by 9 is r, then it's equal to 9q + r for some integer q; add 8 to that, and you get 9q + r + 8 = 9q + r + 9 - 1 = 9(q+1) + (r-1); in other words, the sum will have a remainder of 1 less than the remainder of the number you started with. This explains why you see a descending sequence of 8, 7, 6, ... 1 that then "wraps around" to 9 and then continues on as a sequence of 9, 8, 7,... 1 repeated. The pattern in column 2 also has a similar explanation: you're incrementing the remainders by 2 each time, so you get 2, 4, 6, 8, then 10, which "wraps around" to 1, then from there 3, 5, 7, 9, 11, except 11 "wraps around" to 2, and so on. Sorry if the explanation got a bit rambling near the end. If you're interested in this sort of thing, I would definitely recommend the book by David Burton--all of the stuff said above about remainders comes from the theory of "congruences" of numbers, which Burton covers quite thoroughly.
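The digital-root computation running through this thread is easy to check in code. A minimal sketch (the function name is mine), which also verifies the closed-form "remainder when divided by 9" rule mentioned above:

```python
def digital_root(n):
    """Sum the base-10 digits of n repeatedly until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# The closed form from the formula discussed above: for positive n,
# digital_root(n) == 1 + (n - 1) % 9, so multiples of 9 always land on 9
# and multiples of 3 always land on 3, 6, or 9.
```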
This is awesome. Thank you for the time and effort put into your reply. I'm definitely looking into David Burton and his theory. Much appreciated
Well, none of the content of the book is "Burton's theory", really--it's just a textbook, and all the big ideas should be credited to mathematicians like Euler, Gauss, and Fermat. Still a great book, though. Also, I should note that the book assumes you already have some familiarity with reading and possibly writing mathematical proofs. If you aren't already familiar, I'd recommend brushing up on some propositional logic (the second chapter of Richard Hammack's free "Book of Proof" is good for this) and proof techniques (e.g. the first 3 posts on this page by Jeremy Kun; the 4th one, on mathematical induction, is also worth reading, but the topic is covered in the first chapter of Burton's book so you can just read about it there.) This will help you get your bearings, but really, the best way to learn about proofs is to just read and write some yourself--and this is best done with a topic you're interested in, so have fun learning some number theory!
I'm stoked. Thank you for all the info. I was really advanced in Mathematics in school. Well. "Advanced". I was in Calculus in 10th grade. Kept getting in trouble for doing math in my head and not showing work. I'm just rusty as all hell. Been 20 years since college Statistics. Last real math I've done. Much much appreciated for all the info and links. I've been going down rabbit holes of mathematic Legends. And keep adding to the list.
[ "Where do ordinals, cardinals and the omega numbers fit in set theory and mathematics as a whole?" ]
[ "math" ]
[ "wtkzrs" ]
[ 15 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
null
https://www.dpmms.cam.ac.uk/~wtg10/ordinals.html This is a great introductory article to uses of ordinals in advanced mathematics. Generally, ordinals are useful when you want to do induction but your inductive process will take an infinite number of steps to finish.
I think you should replace "ordering" with "well-ordering" in your nice answer in the places where you just use the separate term "ordering", since sets can have lots of orderings that are not well-orderings, and ordinal numbers have nothing to do with those other types of orderings on a set.
Thanks! I was definitely a little lazy when writing, I'll add 'well' everywhere.
I can't answer your first question, but may help with the second. If you understand that cardinality arises from bijections between sets. Then ordinal numbers are the same but with order being preserved. So the order type, aka the ordinal number associated with a set, isn't just about the set, but about the set with some order. In fact you can easily "reorder" the natural numbers to give them a new order type. For instance, just declare that in your new order, 3 comes after all other numbers. Suddenly you have a set with order type omega + 1.
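The reordering described above is easy to play with concretely. A toy sketch (the key function is mine), on a finite initial segment of the naturals: declare that 3 compares greater than everything else, and it moves to the end, mirroring an initial segment of order type omega + 1.

```python
def new_key(n):
    """Sort key for the reordered naturals: 3 comes after all other numbers."""
    # send 3 "past infinity"; every other number keeps its usual position
    return (1, 0) if n == 3 else (0, n)

reordered = sorted(range(8), key=new_key)  # [0, 1, 2, 4, 5, 6, 7, 3]
```

In the full (infinite) version, 0, 1, 2, 4, 5, ... form a copy of omega, and 3 sits above all of them as the "+ 1".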
You didn't quite finish that yet, e.g., in 2, "these two orderings" should be "these two well orderings" and "they're different orderings" should be "they're different well orderings". You say in 2 that "there is only one way to well order a finite set", but that's not true: a finite set with n elements has n! well orderings. You meant to say that all well orderings of a finite set are order isomorphic to each other.
[ "Fractional derivatives? Irrational derivatives (such as eth or Pith derivative)? What about derivatives that constantly change?" ]
[ "math" ]
[ "wts1id" ]
[ 68 ]
[ "" ]
[ true ]
[ false ]
[ 0.94 ]
In my undergrad I took real analysis where I covered all of calculus through proofs. Pretty interesting, and I learned/understood calculus much much better. I did well in the class, didn't struggle as much and I genuinely enjoyed it. On my last day I went up to the professor and asked him "what happens if you take a fractional derivative like a 3/4ths derivative? Can you even do that? What about an irrational derivative like the Pi-th derivative? What about a complex number like 3i + 1? How about a derivative that constantly changes, say we take the 5x/4 derivative where it's dependent on x? Are any of these ideas useful or even used at all?" There was a line of students behind me wanting to ask about the final, and they all thought my questions were actually pretty interesting. I would think about these ideas during class when I didn't have much to do other than write notes. But even the professor seemed interested (later I found out he was a graph theorist, and wasn't specialized in analysis). He just told me I'd learn that in graduate level analysis, and left it at that. Well, I took graduate level analysis and I didn't learn any of that. Can someone explain to me what these concepts are, how they work (do they work??), and why haven't I learned about this yet?? Will I even learn about this?
Wikipedia has an article titled "Fractional calculus" which seems like it could be of interest to you. https://en.wikipedia.org/wiki/Fractional_calculus
Here’s an excellent SoME2 submission I was watching the other day . Gives a nice high-level view on what a fractional derivative is
One idea, not rigorous, is that the nth derivative of x^m is x^(m-n) * m!/(m-n)!. If we replace the factorials with gamma functions, we can define the nth derivative when n is any real rather than just an integer. Then, functions that can be expressed as a Taylor series can be differentiated like so. Idk about non-analytic functions though. Others have linked Wikipedia articles of what I'm imagining is a far more rigorous and better-thought-out idea than what I came up with after reading your post. It was just an idea.
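The Gamma-function idea above can be made concrete for monomials. A minimal sketch (the function name is mine), using D^a x^m = Γ(m+1)/Γ(m-a+1) · x^(m-a); note Γ has poles, so m - a + 1 must not be a non-positive integer:

```python
import math

def frac_deriv_monomial(m, a):
    """Coefficient c in D^a x^m = c * x^(m - a), replacing factorials by Gamma.
    For integer a this reduces to the usual m!/(m-a)!."""
    return math.gamma(m + 1) / math.gamma(m - a + 1)
```

A sanity check of the semigroup property in the simplest case: applying the half-derivative twice to x recovers the ordinary derivative, since Γ(2)/Γ(3/2) · Γ(3/2)/Γ(1) = 1.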
There are a variety of different fractional derivatives that come up in a variety of applications. The classical "fractional derivative" is the Riemann-Liouville derivative, which uses a fractional integral (a variation of the Cauchy iterated integral formula) and integer order derivatives. Significantly, it does the fractional integral first, then an integer derivative second. So if you want a 1/2 derivative, you perform a 1/2 integral then a full derivative. This is leveraged in modeling the stress and strain relationships for viscoelastic solids, an application that arose around the 1960s, I think. However, initial value problems using the Riemann-Liouville derivative are awkward, since you need fractional order initial conditions, which don't have a nice physical analog. Interestingly, if you differentiate first, then use a fractional integral, you get the Caputo fractional derivative, and there you have your usual physically realizable initial conditions. Each of these plays nicely with the Laplace transform, and so you can work with the resulting signal using the Laplace transform. From the linear systems theory perspective, that makes them especially nice. Remember that the Laplace transform of the derivative is multiplication by s in the Laplace domain; for each of these operators, the fractional derivative of order q corresponds to multiplication by s^q (q = 1/2, pi, etc.), neglecting initial data. There are also fractional operators defined in the frequency domain, such as the fractional Laplacian, denoted (-𝛥)^q, where we apply this by taking the Fourier transform of a function over R, then multiplying by |w|^(2q), then applying the inverse Fourier transform. People use this for fractional diffusion problems, and it appears in computed tomography, since CT scanners obtain their images by applying the filtered back projection algorithm (where the filtering is by (-𝛥)^(1/2)). I have an older video I made for my Differential Equations class that shows them how they can obtain a fractional derivative using some notions about convolutions and the Laplace transform. https://youtu.be/ClC4iA3N59A
If you took graduate level analysis, you should be familiar with the Fourier transform, and how it exchanges differentiation with respect to the spatial variable x and multiplication by the frequency variable xi. The intuition here is that differentiation sends a function to a "rougher" function, i.e. one with larger contribution from higher frequencies. In general for one variable, differentiating n times in physical space corresponds to multiplying by xi^n in frequency space. With this in mind, define the fractional derivative of order s > 0 as the operator corresponding to multiplication by |xi|^s. When s < 0, this operator is instead fractional integration, coinciding with the idea that differentiation and integration are "inverses". For complex regularity s + it, a little bit of complex analysis is needed but the idea is the same. These ideas go under the banner of harmonic analysis in service of PDE. As such it's pretty much only taught in topics courses by those that work in PDE. Good references include Appendix A of Terry Tao's Dispersive PDE and Elias Stein's Singular Integrals/Harmonic Analysis.
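The Fourier-multiplier definition above is easy to try numerically on a periodic grid. A sketch assuming NumPy is available (the function name is mine); the |k|^s multiplier matches the definition in the comment, which for odd integer s differs from the ordinary signed derivative by a phase:

```python
import numpy as np

def frac_deriv_periodic(f_vals, s):
    """Order-s fractional derivative of samples on a 2*pi-periodic grid,
    implemented as multiplication by |k|^s on the Fourier side."""
    n = len(f_vals)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0, 1, ..., -1
    fhat = np.fft.fft(f_vals)
    return np.real(np.fft.ifft(np.abs(k) ** s * fhat))
```

On f(x) = sin(2x) the multiplier acts as 2^s, so the half-derivative returns sqrt(2)·sin(2x) and the order-2 derivative returns 4·sin(2x), consistent with |xi|^s scaling.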
[ "The Square-1 puzzle and Groupoids" ]
[ "math" ]
[ "wsv9mn" ]
[ 16 ]
[ "" ]
[ true ]
[ false ]
[ 1 ]
Just wanted to share a problem I've thought of for quite a while, but have no idea how to solve. We know that the Rubik's cube group is a group, because composition of moves is a total function. This fails for a general twisty puzzle. For instance, consider the Square-1. The angles of the pieces in this puzzle are different (the corner pieces are twice as 'thick') and so the cube is effectively bandaged. Thus the set of moves does not form a group, but rather a connected groupoid. For a Rubik's cube, there exists an algorithm called the Thistlethwaite algorithm, which solves the Rubik's cube by reducing the state space to smaller and smaller subgroups of the cube group. I would like to figure out if a similar approach can be done for a Square-1. To do this, I need to answer some questions, but unfortunately, I have yet to find any resources for such a topic. Hence, I would like to ask if anybody knows about what I could read to get more insight on this problem. Thank you!
Every connected groupoid is equivalent to a group, but that doesn't guarantee that the equivalence is useful. The equivalent group might be the trivial group, for example. I agree with you that the practical applications are extremely limited. The groupoid structure itself is too general to have a useful theory associated with it, but for puzzles where the associated group is big, that groupoid might have enough structure to be useful. Whether that structure is meaningfully different from the group, I don't know.
I'd like to conjecture there is no practical application of groupoids to real life, as the reasons they are used in mathematics are rather technical (one must first battle with the fact that every connected groupoid is equivalent to a group, so why even talk about groupoids? I know several reasons, but I wonder if anyone has a particularly simple one.)
Quite a few puzzles are better represented as groupoids than groups, like the 15 puzzle for example. It's simply not the case that any two moves can be composed.
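The point above can be illustrated with a toy sketch (class and method names are mine): moves of a puzzle like the 15 puzzle are arrows between board states, and two moves compose only when the target state of one matches the source state of the other, which is exactly the partial composition of a groupoid.

```python
class Move:
    """An arrow src -> dst in the groupoid of puzzle moves."""

    def __init__(self, src, dst):
        self.src, self.dst = src, dst

    def compose(self, other):
        """self after other; defined only when other.dst == self.src."""
        if other.dst != self.src:
            raise ValueError("moves do not compose")
        return Move(other.src, self.dst)
```

In a group every pair of elements composes; here `compose` is a partial operation, which is the structural difference being discussed in this thread.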
every connected groupoid is equivalent to a group Is equivalence necessarily the correct notion to compare groupoids though? It's certainly unhelpful for describing these twisty puzzles.
Groupoids are playing a more relevant role in the description of the structure of physical theories. For instance, the use of groupoids is very convenient to describe systems with internal and external structures (see for instance [12], [13] and references therein). It should be remarked that a groupoid structure can be identified also in the considerations made by Dirac in the previously quoted paper [11]. Indeed, the composition law of the generating functions representing transformations allows one to define a groupoid structure for the latter. Another instance of a groupoid is provided by the Ritz-Rydberg combination principle of frequencies in spectral lines as observed by Connes [14], where groupoids are connected with the structure of certain measurements, in this case frequencies of the emission spectrum of atoms (source).
[ "Connections on orthonormal frame bundles and torsion" ]
[ "math" ]
[ "wt7kdz" ]
[ 34 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
I'm studying differential geometry and was looking for a proof of the fundamental theorem of Riemannian geometry in the language of principal bundle connections. The usual proof shows existence and uniqueness of a connection in the sense of a covariant derivative on the tangent bundle, but I find it terribly unenlightening. This is what I have so far: the Riemannian metric allows us to reduce the frame bundle to an orthonormal frame bundle. Metric compatible connections are simply connections on this principal O(n) bundle. I believe the set of all such connections to be in one to one correspondence with their torsion. What I'm looking for is a way to prove this. The map from connections to torsion is clear: it associates a connection with its torsion. I would like to find a description of its inverse map, which should reconstruct, given a suitable tensor field, a metric compatible connection whose torsion is the given tensor field. Checking that those two maps are inverses of each other would complete the proof. Could you please provide some description of said inverse map or point to a source that discusses this topic in this language of principal bundles and principal connections? Thank you all!
I'm sorry, what is the map between a principal bundle connection and torsion? I actually think your issue is not being clear on this. I'm only aware of torsion in the context of Lie algebroids, not general principal bundles. Linear metric connections are determined by torsion because the difference between any two such connections is a torsion tensor, by which I mean a skew symmetric TM valued (2,0) tensor, a TM valued 2-form. And any metric compatible connection plus such a tensor is another metric compatible connection. The important part of why this is true is skew symmetry. Once you pick one connection (in a general principal bundle), the space of all connections is identified with the space of all tensors of some appropriate type. This is again due to "the difference between two connections is a tensor" business. This identification is NOT canonical though, you have to pick one connection to do it. Here is perhaps a neater/more satisfying view of torsion. A linear connection on a vector bundle E gives you a way to extend the exterior derivative "d" from real valued forms to E valued forms, let's still call this operator "d". So consider the identity map Id as a TM valued 1-form. The torsion of the connection is d(Id), a TM valued 2-form. Being torsion free then seems like a very natural condition: it's the assumption that the identity map is a closed form. I should add that of course some people work with metric compatible connections with torsion, like in nonKahler geometry. You could perhaps come up with some funky way to view it in a context where the space of skew symmetric 2-forms is identified with the lie algebra o(n). Again, this identification isn't canonical.
For a Cartan connection (such as in Riemannian geometry), we define the torsion as being the projection of the curvature from the associated g bundle (g being the Lie algebra in question) to the quotient bundle g/h.
It can be nice to find invariant interpretations of these results, but actually I think the most philosophically natural way of understanding the fundamental theorem of RG is as a local, linear algebra result. Please see the attached partial notes (of the first few lectures) that I took of a course taught by Simon Donaldson on Riemannian geometry. His course started with the following phrase: "We begin with the most natural object in differential geometry. A k-jet of Riemannian metrics is..." From this perspective the classification of torsion is straightforward in terms of the representation theory of certain tensor products of symmetric powers (explained basically in the k=1 jets section). Basically you can show that for a metric compatible connection in a sufficiently good local coordinate system (which exists from the analysis of 1-jets) the Christoffel symbols must have a symmetry property which (because of an isomorphism S^2(V) ⊗ V -> V ⊗ S^2(V)) implies that their value is determined entirely by their symmetrisation. The latter is exactly the torsion tensor, so each choice of torsion tensor determines a unique metric compatible connection. This calculation can be done entirely on the level of germs using representation theory. He also gave an amazing characterisation I hadn't seen elsewhere of the LC connection on the cotangent bundle as (d, Killing).
This is a subtle fact that can be explained using representation theory. But it’s easier to do it with respect to a local frame or local coordinates. There’s a famous sequence of 6 equalities needed, called Cartan’s lemma. I don’t know of any simple proof without using indices.
It would be difficult to explain all the ideas here, but the fundamental theorem of Riemannian geometry can be phrased in terms of the Spencer sequence associated to the group O(n) (and its standard representation). Two facts about this sequence are needed; once everything is phrased in this language, the proof is pure linear algebra, and those two facts together are equivalent to the Fundamental Theorem. The area of differential geometry that deals with these ideas is called the Method of Equivalence. It works generally for any geometric structure (G-structure) on a manifold. It is mainly due to Elie Cartan. One good place to read about it is Robert Gardner's nice book "The Method of Equivalence and Its Applications".
[ "Research Areas which combine Computer Science with Physics?" ]
[ "math" ]
[ "wt1xjv" ]
[ 21 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
Throughout my high school years I loved physics very, very much. I always wanted to do a PhD in physics. Along with it I thought of pursuing space science. But due to certain reasons I ended up in computer science. I have started the third year of my bachelor's degree. I don't hate computer science, it's just that I loved physics a lot. Are there research areas (I have always wanted to do research) which combine physics and computer science? I am really interested in the domain of particle physics. I am aware I sound naive.

1) I talked to some of my friends who are good coders and they suggested I get into making apps that simulate certain physics fundae. But that is not what I want. I would stress again that I liked actual physics, so I don't think I would enjoy just simulating stuff. I am more of a research kinda guy, not an application oriented person.

2) Once again I want to stress that it is not at all that I hate computer science or consider it bad. It's just that I want to do what I loved more, and rather than completely changing my stream (which is difficult as well as risky, and above all leaves me with a guilty conscience, as I would waste four whole years), I want to strike a balance between what I want and what I have.
What about quantum computing? It's a lot of comp sci with quantum physics under the hood
I have seen a number of physicists in computer science departments doing research in medical imaging
Not really true. There are different flavours of course, but most of it leans more on the computer science/information theory aspect than physics. In fact, a lot of the time in research you can skip thinking about how a qubit is implemented, and just think abstractly about a qubit. YMMV, however. Besides quantum computing, there's also quantum communication (my field of expertise).
Depends. If you're doing research on quantum computing then yes. People working for quantum computing companies however seem to be clueless on quantum physics and how it actually works.
Something not yet represented here is that there is an ever-growing research area that lies at the intersection of Statistical Physics, Theoretical Computer Science and Probability. It studies spin systems and algorithms related to them using ideas from physics and computer science. This book is basically a classic in this area (although as a heads up, this is fairly advanced stuff).
[ "Balls and Bins Question" ]
[ "math" ]
[ "wt3fq3" ]
[ 8 ]
[ "" ]
[ true ]
[ false ]
[ 0.84 ]
We throw balls one after another into bins. On each throw, the probability that the ball lands in bin i is p_i, independently of the other throws. Suppose q_S is the probability that after n throws, the set of non-empty bins is S ⊆ {1, ..., m}. Is there a way to easily derive this distribution? (If someone could just point me to a reference, that would be sufficient. Thanks.)
To fix the issues you pointed out in other answers, you need to use the principle of inclusion-exclusion. For a given set S, let g(S) denote the probability that the collection of non-empty bins is a subset of S, and let f(S) denote the probability that the collection of non-empty bins is exactly S. The other answers have told you how to compute g: let p(S) denote the sum of the probabilities of the bins in S; then g(S) = p(S)^n, the probability that every throw lands in S. The principle of inclusion-exclusion lets you compare f and g to each other: f(S) = Σ_{A ⊆ S} (−1)^{|S|−|A|} g(A).
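The inclusion-exclusion formula can be sketched directly in a few lines of Python (a minimal sketch with made-up bin probabilities; note that, as discussed below, it sums over all subsets of S, so it is exponential in |S|):

```python
from itertools import combinations

def f(S, p, n):
    """Probability that, after n independent throws with bin probabilities
    p[0..m-1], the set of non-empty bins is exactly S.
    Uses f(S) = sum over A subset of S of (-1)^{|S|-|A|} * p(A)^n."""
    S = tuple(S)
    total = 0.0
    for k in range(len(S) + 1):
        for A in combinations(S, k):
            pA = sum(p[i] for i in A)                   # p(A): chance one ball lands in A
            total += (-1) ** (len(S) - k) * pA ** n     # (-1)^{|S|-|A|} g(A),  g(A) = p(A)^n
    return total

# Two fair bins, two throws: both bins non-empty with probability 1/2.
print(f({0, 1}, [0.5, 0.5], 2))  # 0.5
print(f({0},    [0.5, 0.5], 2))  # 0.25  (both balls land in bin 0)
```

As a sanity check, the values of f over all non-empty subsets sum to 1 whenever n > 0.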
Then if n >= m, the probability would be m! times the product of all the p_i's for bins inside S (ensuring that all bins get hit, and accounting for all orders in which they could get hit for the first time), times the sum of those p_i's to the power of n − m (ensuring all the other shots go somewhere in S).
Thank you for the answer. The inclusion-exclusion principle did occur to me, but it won't quite do in my case. When I said "easily derive" in my question, what I meant was something that is algorithmically efficient to compute for a given S. More precisely, computable in polynomial time w.r.t. n or m. Summing over subsets of S will not scale well, unfortunately. Maybe what I'm looking for is impossible; I don't really know.
The complement event has probability 1 − q_S, and this is the probability that the bins in S′ are all empty. What's the probability that ball i misses all the bins in S′?
As another commenter alluded to, missing all the bins in S′ is insufficient, as one may also miss some bins in S. I need the event that every bin in S is hit at least once and every bin in S′ is missed.
[ "If you could be an expert in any field/theory of mathematics, what would it be and why?" ]
[ "math" ]
[ "wtq6zw" ]
[ 267 ]
[ "" ]
[ true ]
[ false ]
[ 0.95 ]
Ok for me its easy, geometric topology because 1) Its literally what I'm studying for the 'ol PhD and 2) its fucking cool and there's just so much more to learn. But what interests you? If you could be an expert in any field what would it be?? And why??
The field with one element
The Langlands program, because it would mean I know a large number of very different looking areas extremely well
I'd say type theory. I already love type theory and really respect the community. Moreover, it would be exciting to be at the forefront of computer-aided proof assistants. Maybe also category theory, cause fuck learning category theory. If I could skip that pain I would.
Operator theory. I've always thought the intersection of complex and functional analysis quite beautiful, although research in that direction doesn't seem to be too active anymore.
Algebraic Geometry, it’s used all over the place, and is a lot of fun in its own right.
[ "Designing an arithmetic challenge game" ]
[ "math" ]
[ "lwlg9a" ]
[ 3 ]
[ "Removed - see sidebar" ]
[ true ]
[ false ]
[ 0.81 ]
null
Here's a few ideas:

- Deduct points for wrong answers. Effective, simple, but might be too disheartening for kids that get one wrong trying to do it the right way.
- Reward streaks of correct answers. Roughly the same thing, really, but might feel less punishing to kids that get one wrong.
- Score fixed rounds of 20 questions. This takes away the incentive to spam guess almost entirely, since the only way to get a high score would be to get 20 right, with time acting as a tiebreaker, presumably. The downside is that now you're incentivizing speed over accuracy.
- Make the answers uniformly distributed. This makes guessing the answer harder, since the solution will be anything between two values with equal probability, rather than following a normal distribution. It doesn't solve the root problem, but does work pretty well.

Hope one of these works!
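The uniform-distribution idea can be implemented by picking the answer first and then splitting it into addends, instead of picking each addend uniformly (which makes the sum bell-shaped and easy to guess near the middle). A minimal sketch; the function name and the splitting scheme are my own invention:

```python
import random

def uniform_sum_problem(k, lo, hi):
    """Generate k addends (each in [lo, hi]) whose sum is uniform on
    [k*lo, k*hi], so guessing near the middle gives no advantage."""
    target = random.randint(k * lo, k * hi)
    addends = []
    remaining = target
    for i in range(k - 1, 0, -1):
        # Each addend must leave a feasible remainder for the last i addends.
        a = random.randint(max(lo, remaining - i * hi),
                           min(hi, remaining - i * lo))
        addends.append(a)
        remaining -= a
    addends.append(remaining)
    return addends, target

addends, answer = uniform_sum_problem(3, 1, 9)
assert sum(addends) == answer and all(1 <= a <= 9 for a in addends)
```

Note that only the answer is uniform here; the individual addends are not, but that's exactly the point for discouraging guessing.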
Wow, thank you for all the excellent ideas! Yeah, I do not want to deduct points for wrong answers, so as not to discourage. An answer streak seems like it won't solve the problem directly. I really love your suggestion about changing the distribution! I didn't think of that before, and now it is plain to me that it would make it much harder to guess! Can you explain your 20-question suggestion? I currently allow players to customize how many numbers they want to add up. Then they press Start and are given three rounds using that setting. The total score from those three rounds is then added to their running monthly score.
Why not just take points off for inputting the incorrect answer? Maybe even reward streaks of correct responses.
Thanks for replying! Taking points off may discourage kids from trying out harder settings of the game to advance their skill so I avoid that.
you can always give more points to correct answers: like +5 for correct and -1 for wrong. Just a way to discourage guessing. The other option is to allow only two possible anwers for every question. If the first is right 3 points, if the first is wrong and the second is right 1 point, if both are wrong 0 points.
[ "How to explain symplectic geometry?" ]
[ "math" ]
[ "wt9rsz" ]
[ 139 ]
[ "" ]
[ true ]
[ false ]
[ 0.98 ]
I would like to know how you would explain a non-math person what symplectic geometry is, and why it is worth studying.
This is how I understand it (but I am from physics, so please correct me if I'm wrong). If you have a system that can be defined by some variables, you might want to consider the space of all possible trajectories for that system. In other words, you want to construct a playground where you can consider all the possible motions of the system. To do this, you start tracking numbers associated with the state of the system. In classical physics, where systems are made up of some number of particles, normally denoted N, the most naïve idea is to start by tracking the positions of all of them; this is called the "configuration space." Unfortunately, you immediately run into a problem. You see that the position is not really enough information to study the trajectories, because a particle can be in the same position but moving in a different direction. If you think about the possible trajectories/evolutions in the space of positions, you get a disgusting criss-cross of curves at every point; your space isn't expressive enough. So you might think: what if I track the position and velocity of every particle instead of just the position, doubling the dimension of my space? Since classical mechanics is driven by a second-order DE, it actually works out really nicely. If you have a space of dimension 6N, 3N for the positions and 3N for the velocities of the particles, the allowed motions of the system are no longer intersecting. You get this nice property that every point in that space is assigned to precisely one allowed motion; the space is really a bundle of these trajectories packed together cleanly. These kinds of spaces, where each point immediately and uniquely defines a trajectory for the system, are called symplectic manifolds. Studying various cases of them under different constraints is called symplectic geometry.
it's, in fact, a slightly more generic and extendable picture if you consider the momentum rather than the velocity, but this is irrelevant for understanding the idea
If you can explain phase space to a layman, it's not a big leap to explain symplectic geometry.
This is a good start, but I'd say it only motivates phase space, and not the symplectic structure that we care about. The really insightful bit (which you certainly know as a physicist, but our hypothetical layman presumably does not) is that there is a function called the Hamiltonian, which is basically the energy. We use this energy function to get equations of motion, i.e., Hamilton's equations. It turns out that the flow from any point in the phase space is completely determined by the Hamiltonian in a way that is very intuitive: The momentum tells us how quickly the position changes, and a potential gradient tells us how quickly the momentum changes (like a ball gaining speed as it rolls downhill). Now, the big idea that motivates symplectic geometry is that there is always some system of coordinates in which Hamilton's equations have a certain, "canonical" form. And if we imagine many nearby points in our phase space and watch a "fluid" of points flowing as time passes, the volume of that fluid is preserved (cf Liouville's theorem)! The way we measure the volume of a phase fluid is called a symplectic form, and symplectic geometry is precisely the study of 2N-dimensional spaces endowed with a symplectic form. ETA: This is all a bit abstract without examples. The simplest one is a harmonic oscillator, i.e., a mass on a spring. The phase space has the extension of the spring on one axis and the momentum of the mass on the other axis, and the trajectories are easily seen to be ellipses. We can change our coordinates so these trajectories become circles and normalize the energy so any given trajectory becomes the unit circle. Ta-da! We get the canonical form dq/dt = p, dp/dt = -q.
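The volume-preservation point is easy to see numerically on the harmonic oscillator. Below is a minimal sketch (my own toy code, not from the thread) comparing plain explicit Euler with symplectic Euler on dq/dt = p, dp/dt = -q: the non-symplectic integrator steadily pumps energy into the system, while the symplectic one keeps it bounded.

```python
def explicit_euler(q, p, dt):
    # Both updates use the old state: this map is NOT symplectic.
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    # Update p first, then q with the *new* p: an area-preserving map.
    p_new = p - dt * q
    return q + dt * p_new, p_new

def energy(q, p):
    # Hamiltonian of the normalized harmonic oscillator, H = (q^2 + p^2)/2.
    return 0.5 * (q * q + p * p)

qe, pe = 1.0, 0.0   # explicit Euler state,  initial energy 0.5
qs, ps = 1.0, 0.0   # symplectic Euler state
dt, steps = 0.01, 10_000
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)

print(energy(qe, pe))   # drifts noticeably above the initial 0.5
print(energy(qs, ps))   # stays close to 0.5
```

The explicit Euler step multiplies q² + p² by (1 + dt²) every iteration, which is exactly the phase-space area growth that Liouville's theorem forbids for the true flow.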
Symplectic geometry is the correct, and remarkably geometric, formalism to describe dynamics in the Hamiltonian framework. That is, you want to develop a theory which correctly encapsulates the differential equation which arises out of the equations of motion of a Hamiltonian function, which represents the energy of a physical system in a given state. By an incredible coincidence, the minimal structure you need to describe this is: a space whose points can be thought of as "states", and a symplectic form on each tangent space to this space which is closed and non-degenerate. This is absolutely incredible, and one of the most geometrically beautiful stories of 20th century mathematics. From this, and a remarkable theorem of Darboux, you can prove the existence of local coordinates which look like position-momentum coordinates of traditional classical mechanics around any point in the space, which you therefore think of as "locally a phase space". In these coordinates, given a smooth function H on the space, the geometric flow along the symplectic gradient vector field of H, which is defined using the non-degeneracy of the symplectic form (dH is a one-form and can be identified with a unique vector field using the symplectic form), takes exactly the form of Hamilton's equations (again, remarkable). It is because of Darboux's remarkable theorem (existence of Darboux coordinates, and by extension Moser's trick and the simple but powerful assumption that the symplectic form is closed) that this is all the structure you need to capture this theory. Why is symplectic geometry so well-behaved? I think it has something to do with the fact that, once you have this minimal structure (a symplectic structure), just by the above happenstances, many principles which we know work very simply in physics (such as the principle of least action, how conservation laws tell us about the way physical systems will develop, etc.)
must be reflected somehow in the geometry of the underlying symplectic manifold. Translating ideas like this into mathematics produces more beautiful theorems in symplectic geometry (fibres of integrable systems are tori, Lagrangian submanifolds have embedded neighbourhoods which look like cotangent bundles, etc.). A deeper understanding of this would follow from reading Arnol'd's Mathematical Methods of Classical Mechanics.
It's the geometry of Classical Mechanics.
[ "Is self studying math worth it - requires hard thinking" ]
[ "math" ]
[ "wsv0w9" ]
[ 18 ]
[ "" ]
[ true ]
[ false ]
[ 0.83 ]
Hello, I'm very interested in math but I haven't studied anything advanced. I've only self-studied single-variable calculus, some multivariable calculus, linear algebra, and an intro to mathematical reasoning, and that's all. But I really like it. I think math is important, and I really believe math will improve my thinking skills and make me a better programmer. I marvel at the relationship between physics and math, so I think I have an appreciation for mathematics. But I can't make the final step to do all the work and the hard thinking that math requires. How do I make this final step and sit down and study and think and prove statements? I'm not entirely lazy. It seems like the benefits (i.e., improved thinking skills) are so abstract that I can't get myself motivated. Any tips on how to take the final step and study either an analysis textbook or Munkres' topology textbook (first three chapters)? Thanks
If you want to learn math in order to become a better programmer, I highly suggest learning math specifically for programming. You could spend a thousand lifetimes learning all the pure math there is, but it may not make you a better programmer if you don't put it in context
While I agree with most of what you said, my point is that just because you are an incredible mathematician doesn't mean you will be a good programmer, and there absolutely is math made for programming. It's a multibillion dollar industry, you really don't think they've made new math for their purposes?
Self-study. I just posted the function IsogonalConjugate. I originally had much, much longer code, and I was still debugging it. But while researching this I saw the note that the isogonal conjugate is the circumcenter of the reflections of the point over the triangle's sides. Works in 2D and 3D. Since I had code for reflections and circumcenters, the code suddenly became a one-liner and the code was done. Math is chock-full of clever properties. Seneca: "Luck is what happens when preparation meets opportunity." Often, in math, you just need to know something exists. As a programmer/mathematician, almost all the big gains are in linear algebra. Just about every matrix technique is useful for real-world tasks.
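The reflection property mentioned above ("the isogonal conjugate is the circumcenter of the reflections") can be sketched in plain Python for the 2D case. This is my own illustration of the idea, not the poster's actual code; all function names here are made up:

```python
def reflect(p, a, b):
    """Reflect point p across the line through a and b (2D)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    fx, fy = a[0] + t * dx, a[1] + t * dy          # foot of the perpendicular
    return 2 * fx - p[0], 2 * fy - p[1]

def circumcenter(p1, p2, p3):
    """Point equidistant from three points (intersect perpendicular bisectors)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def isogonal_conjugate(a, b, c, p):
    """Isogonal conjugate of p w.r.t. triangle abc, via the reflection property."""
    return circumcenter(reflect(p, a, b), reflect(p, b, c), reflect(p, c, a))

# The incenter is its own isogonal conjugate; for the 3-4-5 triangle it is (1, 1).
print(isogonal_conjugate((0, 0), (4, 0), (0, 3), (1, 1)))
```

With the reflection and circumcenter helpers already on hand, the conjugate really does collapse to a one-liner, which is the point of the anecdote.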
Math requires hard thinking - truer words have never been said. You have to accept and get used to that or you'll never get anywhere in this subject. Self-studying takes a lot of discipline and the progress and rewards are often slow and small, but if you stick with it, it adds up. The youtuber The Math Sorcerer has a lot of videos aimed at self-studiers. He's got a lot of passion and watching him can be inspiring and get you in the mood to go for it. Here's one called The Dark Side of Self Study, which sounds more dramatic than it really is. It warns of some of the common negative issues self-studiers face, like feelings of inadequacy, difficulty focusing, etc. https://www.youtube.com/watch?v=g7MSfHEdxXs&t=552s&ab_channel=TheMathSorcerer This one from last week is called How to Self-Study Math - https://www.youtube.com/watch?v=fb_v5Bc8PSk&ab_channel=TheMathSorcerer I got my BS and MS degrees in math, became a high school math teacher, and now I spend some of my free time self-studying various grad math topics for pleasure. Having a regular job and various life responsibilities makes it challenging to keep consistent, but in a lot of ways it's more enjoyable than when I was studying in school.
A recruiter from an IT company once told me that they really liked math students because “you can teach every one to program, but you cannot teach anyone to think”
[ "Can someone explain why the integral from 0 to infinity of (x^a/e^x) = a! ?" ]
[ "math" ]
[ "lwjs6i" ]
[ 92 ]
[ "Removed - post in the Simple Questions thread" ]
[ true ]
[ false ]
[ 0.97 ]
null
What you have discovered is the gamma function 𝛤(z+1), which is the continuation of the factorial z! to non-integer values (specifically, to complex numbers). You can prove that $\int_0^{\infty} x^{z-1} e^{-x}\,dx =: 𝛤(z) = (z-1)!$ at positive integer values of z by integration by parts, namely differentiating x^{z-1} and integrating e^{-x}, to reduce the integral to (z-1)(z-2)...(1).
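A quick numerical sanity check of the identity ∫₀^∞ xᵃ e⁻ˣ dx = a! is easy with just the standard library (a crude truncated Riemann/trapezoid sum of my own devising, accurate to roughly 10⁻³ here):

```python
import math

def gamma_integral(a, upper=40.0, steps=100_000):
    """Crude numerical estimate of the integral of x^a * e^(-x)
    over [0, infinity), truncated at `upper`."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * h
        total += x ** a * math.exp(-x)
    return h * total   # endpoint terms are negligible on this range

for a in range(5):
    print(a, round(gamma_integral(a), 3), math.factorial(a))
```

The tail beyond x = 40 contributes on the order of e⁻⁴⁰ times a polynomial, so truncating there is harmless for small a.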
One thing I would like to add is that the analytic continuation of the factorial has poles at the negative integers. So even though the Gamma(z+1) function is, in a sense, the best and most useful extension of the factorial to non-integer values, it's not perfect. That said, it would be interesting to know the value of the residue at the poles. Maybe there is something interesting we can say about them after all. EDIT: after a quick Google search I found that the residue of Gamma(z+1) at -n (where n is a positive integer) is (-1)^(n-1)/(n-1)!, which is a pretty neat result.
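That residue can be derived in a few lines from the functional equation Γ(w+1) = wΓ(w), iterated to move the argument to where Γ is finite (a sketch, with w = z + 1):

```latex
\Gamma(w) \;=\; \frac{\Gamma(w+k+1)}{w(w+1)\cdots(w+k)}
\quad\Longrightarrow\quad
\operatorname*{Res}_{w=-k}\,\Gamma(w)
  \;=\; \frac{\Gamma(1)}{(-k)(-k+1)\cdots(-1)}
  \;=\; \frac{(-1)^{k}}{k!}.
```

Setting w = z + 1, the pole of Γ(z+1) at z = −n corresponds to k = n − 1, which gives the residue (−1)^{n−1}/(n−1)!.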
"Feynman's derivation of the gamma function" is my favorite "proof".
You can integrate x^a / e^x by parts (if a is a natural number, at least)
At the same time, while it has poles it is unique up to log convexity which is a reasonable property to give to an extension of a factorial function considering how factorials grow.
[ "Becoming a mathematician?" ]
[ "math" ]
[ "lwgmwn" ]
[ 15 ]
[ "" ]
[ true ]
[ false ]
[ 0.86 ]
null
I should have specified more I guess. I was referring specifically to masters programs. You are right about most doctoral programs.
Work at subway until you solve the twin prime conjecture But more seriously don’t give up. Even if you can’t do it professionally, you are closer to the secret than most. Keep reading, share with people. The Teaching Company has a lecture you can torrent “Joy of Thinking-The Beauty and Power of Classical Mathematical Ideas” Start a YouTube channel??? Operations research uses very little math, but something like constraint based programming is essential in things from laying out paths in microprocessors to string theory. Queuing systems are invaluable in cost savings for help desks, and other services, whether automated or not, to determine the amount of resources to utilize to handle a flow. The Langlands program is still open. Amazingly there is multiple realizability to cohomology 2, embedded, or a point as a sheaf..... No one knows.
The Langlands program is still open. Didn't Fargues and Scholze just prove it the other day?
You love mathematics, but it's hard to really know what the life of a mathematician is like until you get into grad school. It is tough and intense. You can't just read proofs; you need to know how to prove everything yourself if you're going to succeed. I still love it, but in my case it was not worth the sacrifices to make a career out of it. You have one year left to try to turn things around. Good luck. Let me give you some ideas on the troubles ahead. Have a read of my blog, "how I became a cryptographer", which talks about my rigorous journey through mathematics and cryptography. It was HARD and painful. Then ask yourself: are you willing to make the same sacrifices? If yes, then go for it, and hopefully my blog gives you some tips on how to get into grad school (like my cross-country journey just to talk to the great Carl Pomerance). Good luck.
Their work is very impressive but they did not “prove the Langlands program” yesterday.
[ "Grad school prospects" ]
[ "math" ]
[ "lwb3dq" ]
[ 1 ]
[ "" ]
[ true ]
[ false ]
[ 0.57 ]
null
Depends what you mean by “good school”. It’s probably better to speak in comparison to your current school. Realistically with a 3.5 GPA (it looks like no graduate courses) and no research, it might be possible to get into a less competitive program, but it’s very unlikely that you get in anywhere as good as your current place, and you won’t be admitted to anywhere better. I think if you want to improve your odds, you need to improve your grades both this semester and the coming fall semester, and in the middle try to get some research experience.
Unfortunately, your submission has been removed for the following reason(s): Career and Education Questions (see /r/matheducation). If you have any questions, please feel free to message the mods. Thank you!
I have 3 graduate classes
Ok, nice. Provided you do some research this summer and get good grades in 1-2 more graduate courses next fall you probably have a reasonable chance at schools similar to your current one.
Okay. Are you applying to PhD programs? If that’s your goal, would you be okay with going to a masters program (some provide funding)?
[ "What is the next zero?" ]
[ "math" ]
[ "lwejef" ]
[ 0 ]
[ "Removed - incorrect information/too vague" ]
[ true ]
[ false ]
[ 0.43 ]
null
incredible feats of engineering that were done before this discovery To be clear, people understood quite a lot about forces, mechanisms, and quantitative relationships, they just didn't use our current place-value way of writing numbers. Roman numerals get a bad rap, but they are a very effective tool with some significant advantages over the positional notation system, especially for illiterate people (someone can learn Roman numerals in a small fraction of the time it takes to learn the Hindu–Arabic system). People did not use Roman numerals for arithmetic (or paper methods at all; paper hadn't been invented and alternate writing surfaces were expensive/cumbersome), but instead did calculations mentally, on fingers, or with a counting board. The counting board was positional, and had a 'zero' (empty position on the board). The best way to think of Roman numerals is as a serialization/record-keeping format, a direct written representation of the state of the counting board. People didn't think of geometry using quite the analytic coordinate method or vector notion commonly used today (developed in the 17th–19th century; though some of Apollonius's work is similar from 2000 years earlier), but that doesn't mean they didn't understand many important engineering principles. The biggest difference between ancient and modern engineering is probably that they used more expensive materials and over-engineered everything much more. But most of this is down to changes in metallurgy, chemistry, material science, and precision of machine tools. What I mean by this is, is there another possible big discovery that will change how we do math or are we pretty set with our system. Our formalisms for handling geometric relationships algebraically have a better unifying alternative invented ~150–100 years ago but still not in wide use. See http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf
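The "serialization format" point is easy to make concrete: apart from the subtractive pairs (IV, IX, XL, ...), a Roman numeral is just a sum of fixed symbol values, mirroring counters on a board. A minimal decoder sketch (my own illustration):

```python
# Each symbol is a fixed value; a numeral is (mostly) just a sum of its
# symbols, with subtractive pairs handled by a single lookahead.
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def from_roman(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # Subtractive notation: a smaller symbol before a larger one is negated.
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(from_roman("MCMXLVIII"))  # 1948
```

Note there is no zero symbol anywhere: an empty column on the counting board simply produces no symbols in the record, which is exactly the point made above.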
Also, I'm probably way off talking about quantum mechanics in math. QM is so much math that most physics students can't learn it all. That's not to shit on them; neither can most math students.
Honestly, the idea that "binary" has anything at all to do with computing is pretty much a red herring. Computers could be done in any base; we just chose two cuz it was cheap and simple. There have been ternary computers before. Anyway, the main point is that a computer's expressive power is pretty much unrelated to its choice of representation. Is binary not simply yes or no? I've heard that quantum computers have more processing power because they can be both yes and no. Yeah, but that's again not unique to binary. For instance, a quantum ternary bit (a qutrit?) can be 0 and 1 and 2. The thing that makes quantum special is the superposition of states, not the representation of base states.
I believe the only way our math can be considered obsolete is when machines become better than us. Our current math is so connected worldwide that a fundamentally better way of doing math is very unlikely to exist, unlike math 1000 years ago, when math was developed locally, and different people would have developed different concepts and missed some that other people invented. Of course, we might have missed some basic stuff that would help us very much, but I think that's really unlikely.

Edit: These important concepts like the 0 and the negatives were developed in a time where there were very few "mathematicians". I'm not sure, but I would say that a year of today's math efforts would be equivalent to several years back then.
So a sample paragraph written in 200 years People back then understood quite a bit about quantum mechanics they just didn't have (technology), and the (system) to perform calculations with what they knew like we do now. This came off as snarky but I didn't know how else to say it haha. It's a weird thought but it's kinda interesting... Also I'm probably way off talking quantum mechanics in math as I have no clue what I'm talking about.