One of the comments was "the Collatz conjecture feels like we're missing a branch of mathematics", or something to that effect. I followed that rabbit hole a bit and found the Pólya conjecture[0], which was disproven when a counterexample was found at approximately 10^361. Here I was naively thinking that if no counterexamples were found in the first, say, 10^10 numbers, no counterexamples should exist. If only it were so easy.
That's just the first counterexample that was proven to exist. The smallest counterexample is less than 10^10 (in fact, smaller than 10^9).
> The Pólya conjecture was disproved by C. Brian Haselgrove in 1958. He showed that the conjecture has a counterexample, which he estimated to be around 1.845 × 10^361.[3]
> An explicit counterexample, of n = 906,180,359 was given by R. Sherman Lehman in 1960;[4] the smallest counterexample is n = 906,150,257, found by Minoru Tanaka in 1980.[5]
EDIT: rephrased because despite having a maths degree I can't count.
One striking characteristic of Grothendieck’s mode of thinking is that it seemed to rely so little on examples. This can be seen in the legend of the so-called “Grothendieck prime”. In a mathematical conversation, someone suggested to Grothendieck that they should consider a particular prime number. “You mean an actual number?” Grothendieck asked. The other person replied, yes, an actual prime number. Grothendieck suggested, “All right, take 57.”
My hot take is that it should be called the L^(1/2) norm. Many theorems and formulas become a lot easier to state if you redefine L^p as L^(1/p).
For instance, under this notation, the dual of L^p is just L^(1-p). And Littlewood’s interpolation inequality is a lot easier to remember, since the exponents used come directly from the coefficients in the convex combination:
If r = ap + bq, where a and b are nonnegative and sum to 1, then |f|_r <= (|f|_p)^a (|f|_q)^b
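A quick numerical sanity check of that interpolation inequality, for anyone who wants to play with it (my own Python sketch using the reciprocal-exponent convention above; lp_norm is just a helper name I made up):

    import numpy as np

    def lp_norm(f, s):
        # "new" notation: L^s here means the classical L^(1/s) norm
        p = 1.0 / s
        return (np.abs(f) ** p).sum() ** (1.0 / p)

    rng = np.random.default_rng(0)
    f = rng.standard_normal(1000)

    s1, s2 = 0.25, 0.8   # classical exponents 4 and 1.25
    a, b = 0.3, 0.7      # nonnegative, sum to 1
    r = a * s1 + b * s2

    lhs = lp_norm(f, r)
    rhs = lp_norm(f, s1) ** a * lp_norm(f, s2) ** b
    print(lhs <= rhs)    # expect True

Here the norms are taken with respect to counting measure on 1000 points, where the same Hölder argument goes through.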
Coldness handles 'negative temperatures' much better. As Wikipedia puts it:
> Though completely equivalent in conceptual content to temperature, β [= coldness] is generally considered a more fundamental quantity than temperature owing to the phenomenon of negative temperature, in which β is continuous as it crosses zero whereas T has a singularity.[7]
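For reference, the standard relation (textbook statistical mechanics, not part of the quote): coldness is $\beta = 1/(k_B T)$, equivalently $T = 1/(k_B \beta)$. A system passes from positive to negative temperature by letting $\beta$ slide smoothly down through zero, while $T$ blows up to $+\infty$ and reappears at $-\infty$; that pole in $T$ is the singularity being referred to.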
I recall the end of a science fiction novel, wherein we finally meet aliens. It turns out that, as a trend, the species that are good at counting, and especially at counting things at a glance, are bad at math. The species that is best at math has a hard time telling the difference between two things and three things just by looking.
I pursued and achieved an EE degree plus a couple extra math courses largely because I was "not good at math" as a youth. Nothing like hearing "I don't think math is your subject" from a key adult to light a fire under one's fanny.
In hindsight I could have had the same career path with a straight CS degree and much less stress during my college years. No regrets though, math really is fun! Even more so when the classroom setting is removed.
When I was about 12yo, I went to the eye doctor with my father (in France). The doctor realized that I was colorblind and told my father, in front of me, that I would never be an engineer because of that.
At school, we had tests to show what we were good at. The guy who came to comment on our tests told me that I should definitely go for something like literature or history.
And here I am, an engineer with an extra PhD in physics, who dreams about meeting these idiots and explaining to them that what they do is revolting. They are probably dead by now though (it was in the '80s).
That reminds me of a parody song from 3Blue1Brown[0] about how certain numerical patterns don't quite hold and can fool you if you don't look far enough out.
There's a great Mathoverflow thread collecting examples of this sort of phenomenon (called "eventual counterexamples" in the thread), together with some interesting analysis (see Joel Hamkins' answer): https://mathoverflow.net/questions/15444/examples-of-eventua...
Okay but the Collatz conjecture is a little different in that there can't just be a one-off counterexample: it's a statement about a sequence. The counterexample would have to be either a cycle (that excludes 4/2/1), or a sequence of numbers that keeps spiraling up indefinitely. And they've proven that any cycle would have to be very long[1]. Either way, a counterexample would hand you a sequence of numbers that happens to satisfy a very specific criterion.
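For anyone who hasn't played with it, checking a single starting value looks roughly like this (my own Python sketch, with an arbitrary step cap; a bounded search like this can only ever confirm individual starting values, never settle the conjecture):

    def collatz_reaches_one(n, max_steps=10_000):
        # follow n -> n/2 (n even) or 3n+1 (n odd) and report whether we
        # hit 1 within max_steps; a true counterexample would either cycle
        # without hitting 1 or grow without bound, so it would never return True
        for _ in range(max_steps):
            if n == 1:
                return True
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return False

    print(all(collatz_reaches_one(n) for n in range(1, 10_000)))  # True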
Right. So there might be some set of numbers way out there that are like the “sporadic simple group(s)” of the Collatz conjecture. Following the same rules as everyone else, you end up with something otherwise unclassifiable.
One of my favorite mathematical tangents involves a theorem of Littlewood.
The prime number theorem states that the prime counting function $\pi(x)$ is well approximated by the integral $\int_0^x \frac{dt}{\log t},$ which is a function that's become named $\mathrm{li}(x).$ Littlewood proved that the sign of $\pi(x) - \mathrm{li}(x)$ changes infinitely often, but his proof didn't produce a specific value where such a sign change occurs.
In 1933, one of Littlewood's students showed that, assuming the Riemann hypothesis, at least one sign change occurs below the number $e^{e^{e^{79}}}$, which is approximately $10^{10^{10^{34}}}.$ In 1955, he was able to show unconditionally that such a sign change occurs below the number $e^{e^{e^{e^{7.705}}}}$, which is approximately $10^{10^{10^{964}}}$.
Both of those numbers absolutely dwarf the estimated number of elementary particles in the observable universe, so it's utterly impossible to either compute or write them down, even if you used the entire universe as your computer or your writing surface.
I imagine there might have been a bit more progress in the last 25 years, but I have no idea if it's possible to actually write down all the digits of the current best upper bound.
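To get a concrete feel for why the sign change is so far out of reach, here's a rough sketch of my own (not from the comment; prime_count uses a plain sieve and li is approximated with the trapezoid rule) showing $\mathrm{li}(x)$ sitting comfortably above $\pi(x)$ in any range we can actually compute:

    import math

    def prime_count(x):
        # pi(x) via a simple sieve; fine for small x
        sieve = bytearray([1]) * (x + 1)
        sieve[0:2] = b"\x00\x00"
        for i in range(2, int(x ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
        return sum(sieve)

    def li(x, steps=200_000):
        # li(x) = li(2) + integral from 2 to x of dt/log(t), trapezoid rule
        li2 = 1.04516378  # known value of li(2)
        h = (x - 2) / steps
        total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
        total += sum(1 / math.log(2 + k * h) for k in range(1, steps))
        return li2 + h * total

    for x in (10**3, 10**4, 10**5):
        print(x, prime_count(x), round(li(x), 1))
    # roughly: 168 vs 177.6, 1229 vs 1246.1, 9592 vs 9629.8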
> That upper bound has since been lowered in 1999 to 1.38922 * 10^316 here: ...
>
> but I have no idea if it's possible to actually write down all the digits of the current best upper bound.
I'm probably missing something that should be obvious, but why wouldn't it be possible to write down 317 digits?
If it is "undecidable" this means there is no counterexample to the Collatz conjecture, since any counterexample would disprove it. But the Collatz conjecture does exactly state that there are no counterexamples. Which means: If it is undecidable, it is true.
Which seems a bit paradoxical. If you can prove that the Collatz conjecture is undecidable, you would also prove that it has no counterexamples, and thus that it is true. Which would make it decidable -- contradiction. So this seems to prove that if the Collatz conjecture is undecidable, this fact is itself also undecidable.
That is the case for something like Goldbach's Conjecture, which says that every even number > 2 is the sum of two primes. If it's false, then there is a counterexample, and it is easy to prove whether or not a given number is a counterexample (just loop over all pairs of smaller primes).
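A sketch of that finite check (my own Python, with made-up helper names), just to make the point concrete:

    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def is_goldbach_counterexample(n):
        # n is a counterexample iff it's an even number > 2 that is not a
        # sum of two primes; the loop is finite, so this always settles the
        # question for any particular n
        if n <= 2 or n % 2 != 0:
            return False
        return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    print(any(is_goldbach_counterexample(n) for n in range(4, 10_000, 2)))  # False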
But that is not the case for the Collatz Conjecture. A Collatz counterexample could be a number whose orbit loops back around. That would be a provable counterexample. Another kind of Collatz counterexample would be a number whose orbit never terminates or repeats; it just keeps going forever. If such an infinite sequence existed, it might not be possible to prove that it's infinite. And if it isn't provable, then the conjecture would be both undecidable and false.
I think you're being purposely dense. You pretend that it's unusual that a human would view a pattern that has occurred 10 billion times and then expect that it would continue forever. But, I'll answer anyway.
It seems that for such a simple problem, involving basic arithmetic and small numbers, 1, 2, and 3, there should be a number N where if you try all examples less than N, you have sufficient "resolution" to reveal all patterns between the numbers. Maybe we could somehow say "if a counterexample exists, it must be smaller than N". I don't know what kind of math would allow us to formalize the possible patterns, or what N is; I don't think it's been discovered yet. Maybe such math doesn't exist, but we certainly haven't discovered all mathematical tools for solving such problems, so maybe it does.
This kind of assumes that “A simple problem, involving basic arithmetic and small numbers, 1, 2, and 3” can be encoded into some bounded-state Turing machine.
if you have absolutely no grounding in maths or science, then sure, it would be unusual to think that way, but if you do, which you clearly did, then it should no longer be unusual whatsoever.
it would be trivial to artificially construct a sequence that equals 1 until n=10^10, at which point it equals pi/785498. and such sequences trivially do exist, so - without additional information - why would you ever think that the one you're studying is not one of them?
even from a metaepistemological standpoint, had you never come across the notion that, for example, Fermat's Last Theorem could not have been proven simply by showing that 10^10 of the infinitely many possible cases satisfy it?
>It seems that for such a simple problem, involving basic arithmetic and small numbers, 1, 2, and 3, there should be a number N where if you try all examples less than N, you have sufficient "resolution" to reveal all patterns between the numbers
this analogy illustrates the opposite of what I think you intend it to. if we could artificially iterate the sun rising infinite times, somewhere very high up, maybe after 10^10, it eventually would no longer rise
True, and that did occur to me as I was writing it.
The point was more that any length of time greater than a few thousand years is as good as infinite for a human, so (for, you know, everyday purposes) we might as well assume that the sun will keep on rising forever.
Here's a fairly easily understood problem, which if you could solve it would make you famous in the mathematical world and win you a million dollar prize:
For a positive integer n:
Let H(n) = 1 + 1/2 + 1/3 + ... + 1/n
Let D(n) = the sum of the positive integers that divide n. E.g., D(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28.
Prove or disprove that for any positive integer n > 1:
D(n) < H(n) + exp(H(n)) log(H(n))
That easy-to-understand problem turns out to be equivalent to the Riemann hypothesis [1], which is one of the most famous and important unsolved problems in number theory.
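Just to make the statement concrete, here's a brute-force check of the inequality for small n (my own sketch; obviously a finite check like this proves nothing about RH):

    import math

    def divisor_sum(n):
        return sum(d for d in range(1, n + 1) if n % d == 0)

    # check D(n) < H(n) + exp(H(n)) * log(H(n)) for 1 < n < 5000;
    # proving it for every n > 1 would be equivalent to RH
    h = 1.0  # running harmonic sum H(n), starting at H(1)
    for n in range(2, 5000):
        h += 1.0 / n
        assert divisor_sum(n) < h + math.exp(h) * math.log(h), n
    print("inequality holds for 1 < n < 5000")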
I've always felt that the Riemann hypothesis' specific formulation, in terms of the zeroes of zeta on the critical strip, is a very annoying way of expressing it. Like it is clearly related to the prime numbers in a really intrinsic way, and therefore the most "intrinsic" expression of the problem shouldn't have anything to do with zeta or complex numbers or roots at all... whatever that is, I wish that's what we treated as the big unsolved problem instead of what we got.
I wonder what the most pleasing equivalent formulation of the RH would be. Gotta assume that some of them are much more directly about prime numbers. Yours seems pretty good. There are some others on here: https://mathoverflow.net/questions/39944/collection-of-equiv... which seem good also.
I'm bipolar, and when I was first diagnosed with bipolar disorder I was 19 and in my first ever manic episode. I dropped out of college and went deep down this particular rabbit hole in an attempt to solve the Riemann Hypothesis and win the prize, spending 3 months on it and gradually driving myself crazier and crazier. My second company had collapsed out from under me when a sociopathic employee with a gambling addiction conned me into thinking they were my best friend while skimming all of our profits, so it looked like we were barely breaking even right under my nose. (To be fair, I was 19.) That, in combination with a semester of Intro to Philosophy where I read Aristotle and Socrates and an Algebra class I became obsessed with, led to me stumbling into the Riemann Hypothesis and going down a rabbit hole that ultimately turned into my first and most intense manic episode and led to my bipolar diagnosis.
I hadn’t even finished college Algebra at the time. I basically spent months playing around with equations on Wolfram Alpha.
The funny thing is this ultimately was kind of a life-changing moment for me, as I learned programming through it, which, in combination with a family member dying, is what finally pulled me out of the mania. I then changed my major to Computer Science, re-enrolled in college, and have since actually gone on to get a degree in CS and a minor in math. Sometimes I look back at the giant set of Google Docs I created during that time, mostly in amusement, but to this day there are things I look at and can't tell whether they're meaningless or not. A part of me wants to pick it back up, but then I tell myself it isn't worth the risk. I've always felt I am uniquely qualified to understand how the guy in the movie “A Beautiful Mind” felt when he stopped talking to/engaging with the hallucinations.
Here are some links to random Google Docs from my ~2017 rabbit hole, if anyone is interested. There's a lot of fairly interesting usage of the Euler-Mascheroni constant, and honestly it goes a bit all over the place.
I kind of doubt there’s anything here, but if you’ve ever wanted to peek into unchecked mania.. have fun. It was months of insanity. (Sharing these in good faith!)
Thanks for being so candid. All I have to add is: if you think a particular idea is driving you crazy or could be your next breakthrough, just let that idea simmer for some time. You will get clarity over time, or you will drop the idea completely. This strategy has worked out quite well for me.
The million dollar prize rules are surprisingly strict. They expect a "natural" proof, in that writing a one-page paper saying "my computer found counterexample 2^1729-1" may not get you the full prize (maybe just a small acknowledgement prize instead), and they will wait for a proof of a generalization of the problem. https://www.claymath.org/wp-content/uploads/2022/03/millenni...
>> One annoying thing about both the Collatz and Goldbach conjectures is that if they are unprovable, it's also impossible to prove they're unprovable (unless math is inconsistent). This has been proved!
The author of the paper actually notes that a slightly different series is suspected to converge to $32/\pi^3$. The good news is that that series is an alternating series, so it's guaranteed to converge to something. But it converges so slowly that, although the second partial sum differs from $32/\pi^3$ by about 0.0004328909249, the 5000th partial sum differs by about 0.0004330860787. Those two numbers only start to differ at the 6th digit beyond the decimal point.
> One annoying thing about both the Collatz and Goldbach conjectures is that if they are unprovable, it's also impossible to prove they're unprovable (unless math is inconsistent). This has been proved!
So is there a countable order of metaprovability? i.e. are there problems that if they're impossible to prove, it's impossible to prove whether it's possible to prove whether they're unprovable, but _that_ can be proven? Or does that mean the same thing? What does it mean if that goes infinitely deep? That we'll never know anything about a problem in that set unless we prove it? I guess it's not possible to prove that a problem is in that set, right? Is it possible to prove that problems do exist in that set at all?
edit: (non-vacuously. Provable problems are obviously vacuously in all the sets).
edit2: oh I see now (I think). Above I'd need an "AND if you can't prove it undecidable ... then you can't prove its undecidability's decidability either, and that can be proved"
So basically each level needs to be conditioned on all 0..n-2 levels not holding; then you can prove level n-1 is undecidable.
level 0: provable
level 1: provably undecidable
level 2: if not provable (can't prove level 0) then can't prove undecidable. (Goldbach, Collatz)
level 3: if it's not in level 0 or 1, then can't prove level 2.
level 4: if it's not in levels 0, 1, or 2, then can't prove level 3.
...
Which means there's no level infinity, since there's no level infinity minus 1 to talk about so it's meaningless. Correct?
> level 2: if not provable (can't prove level 0) then can't prove undecidable. (Goldbach, Collatz)
If you're operating in the ordinary naturals, neither Goldbach nor Collatz are undecidable. The statements are either true or false. The question at hand is whether we can generate a finite proof of that fact in a given axiomatic system. The quote at hand simply said that if no such proof exists (obviously implying the statements are true, because otherwise a counterexample would give a simple finite disproof) then similarly no finite proof of that lack of existence exists either.
They might be undecidable in other arithmetic systems (implying that some models of those systems would have the statements be true and some would have them be false), but not for the naturals.
> Which means there's no level infinity, since there's no level infinity minus 1 to talk about so it's meaningless. Correct?
When you're making up a new definition, you care about (1) is it coherent (don't want to be like the proverbial Ph.D. who made up an exciting mathematical object and studied it for years before finding out no such objects could exist), (2) what can you deduce about that object, and (3) is it "useful" (for some broad definition thereof).
The question of "meaningfulness" can lead you down incorrect paths because it merges (2) and (3). The definition you picked for the broad concept you have in your mind has no level "infinity" because you snuck the naturals into the definition as the indexing set (defined using the successor function). If we strictly look at point (2), yes, there's no "infinity".
The general concept you're looking at though might more naturally fit in a world where there are infinities (i.e., let's look at point (3) a bit more). As a rule of thumb, if you're looking at a set of things indexed by the naturals, especially if they have a successor function, especially if they're defined inductively, especially if their interesting properties are defined in terms of all previous items, it's natural to take a look at indexing via the ordinals and trying to define them via transfinite induction.
For this particular set of things, that may or may not be straightforward. The notions of unprovability, undecidability, ... are a bit intricate, and you need to define them with respect to the system being studied and the system being used to study them. Right now, your inductive definition _seems_ to also require a predecessor function (which, if intrinsic to the property being analyzed would preclude many attempts to wrangle in some infinities), but I wouldn't be surprised if a careful re-writing found that to be an extraneous detail, in which case you could add in a limiting case (defining f(w) in terms of f(n) for all n<w for all limit ordinals w).
Fwiw I'm going back to my original hypothesis and saying there's no limit at infinity, but there is a class of questions that are in none of the levels, essentially meaning you can't prove anything about their solvability. It's impossible to prove that any particular question is in that class, by definition, but it may be possible to prove that there exist questions that are. Which opens the philosophical question of why so many questions aren't.
I'm sorry, but who needs this? If the answer is "we will discover new math methods trying to solve that puzzle", isn't it better to discover new math methods trying to solve something useful?
This may be a question that misses the point - respectfully, what are some practical applications in physics or engineering for such proofs and/or the search for a conjecture counterexample?
Nothing wrong with asking the question. There might not be any practical applications in those fields, and it could still be a worthwhile, interesting, fun question.
But in this case, I'd bet that developing the math needed to solve it would end up with practical consequences. I think of your question sort of like "what are the practical applications of teaching an AI to master Starcraft?" Even if playing Starcraft isn't practical, the existence of such an AI implies lots of practical consequences.
Ah I see. As we evolve our understanding and toolset of certain fields in math, we may serendipitously happen upon approaches to solve existing problems or solve future ones, perhaps.
Sorry for the cliché, but "the most exciting phrase to hear in science, the one that heralds the most discoveries, is not "Eureka!" (I found it!) but 'That's funny...'" --Asimov
One thing that makes it an appealing problem to solve is its apparent simplicity. It’s almost embarrassing that our modern mathematical technology isn’t adequate to resolve such an elementary question.
…or maybe the fact that it proves so hard to answer shows that we’re just wrong for thinking that it’s ‘elementary’. Therefore, any solution would necessarily require new breakthroughs that would lead to greater general understanding. So:
1) mathematicians would like to save face
2) you can’t solve difficult problems without accidentally solving other difficult ones that turn out to have immediate ‘practical’ importance
I do believe it is worth doing in and of itself. So my question isn't trying to set up a gotcha if no practical application is supplied. I think I could certainly improve my intuition in theoretical math fields.
And a telephone company wanted to connect calls more easily, so they invented the transistor, which changed the world in unimaginably many ways. Sometimes you just don’t know where you’ll end up until you try it.
I can’t tell if you’re just joking. Proofs of facts that seem obvious or irrelevant are very important not for the yes-no question they answer but for the development of the deep mathematical theory they necessitate.
It's not that mind-boggling. In general we need to use specific properties of numbers to prove whether they are irrational, and pi+e has very few useful properties to work from. Even proving pi is irrational is not trivial and it is one of the transcendental numbers we know the most about.
> we need to use specific properties of numbers to prove whether they are irrational
Indeed, one could say that all we know about numbers are their properties. A number is just an existence assertion about an object fulfilling some property (possibly uniquely).
I love how these raw mathematicians consider something proved when they can understand it, meanwhile the computer can prove it easily just by counting a finite number of bits. What exactly would be considered proof in this case? Any explanation only mathematicians can understand?
Well, what would your proposed computer proof look like? A computer can easily calculate both sides of the equation up to, say, float precision. But that's not a proof; it only tells you that the two numbers are close to each other!
The first thing you need to do with a series like this is prove that it converges.
Then you can take a series that is already known to converge to pi. The choice of series will mean the difference between the proof being very hard and practically impossible.
Then change that series to give 32/pi^3 instead of pi.
Then deduce a mapping between groups of terms in one series and groups of terms in the other.
> the computer can prove it easily just by counting a finite number of bits
Did you miss the infinite sum there? How would you prove an infinite sum equals a transcendental number by counting finitely many bits? You'd have to count infinitely many bits.
Wouldn't you be able to see the difference converging to zero, at least? Unless it oscillates all over but seems to average out. I don't know if the squeeze theorem applies.
[0]: https://en.wikipedia.org/wiki/P%C3%B3lya_conjecture