I don't think that's a great example. If Kahneman claimed not to be susceptible, it would have greatly undermined his claims about the universality of these phenomena: many other people would presumably also not be susceptible.
If I remember correctly, I took the interviewer's question to mean "now that you're aware of these cognitive biases, are you still affected by them?", not "do you experience cognitive biases?". I don't see the first question as being at odds with the universality claim; the second would be.
I think you're misunderstanding the point this paper is trying to make. They're interested in trying to distinguish whether AI is capable of solving new math problems or only capable of identifying existing solutions in the literature. Distinguishing these two is difficult, because self-contained math problems that are easy enough for LLMs to address (e.g. minor Erdős problems) may already have been solved as subcomponents of other work, without this being widely known. So when an AI makes progress on such an Erdős problem, we don't know if it had a new idea or correctly identified an existing but obscure answer. This issue has been dogging the claims of AI solving Erdős problems.
Instead, here you get questions that extremely famous mathematicians (Hairer, Spielman) are telling you (a) are solvable in <5 pages and (b) do not have known solutions in the literature. This means that AI solutions to these problems would perhaps give a clearer signal on what AI is doing when it works on research math.
I find it unbelievable that they can't settle this question themselves, without posting this, simply by asking the AI enough novel questions. I myself have little doubt that they can solve at least some novel questions (of course, similarity of proofs is a spectrum, so it's hard to draw the line at how original they are).
I settle this question for myself every month: I try asking ChatGPT and Gemini for help, but in my domains it fails miserably at anything that looks new. But, YMMV, that's just the experience of one professional mathematician.
You're wrong. The mistake could have been unfixable. That happens quite frequently (see: countless retracted claimed proofs of major results by professional mathematicians).
The thought police already arrived, see Columbia grant cancellations and Mahmoud Khalil [1].
[1] "Khalil is a “threat to the foreign policy and national security interests of the United States,” said the official, noting that this calculation was the driving force behind the arrest. “The allegation here is not that he was breaking the law,” said the official." https://www.thefp.com/p/the-ice-detention-of-a-columbia-stud...
It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.
I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.
I'm no expert, but I suspect it is a longer process to remove someone from a tenured professorship than from the presidency. We don't know that it won't eventually happen.
There are betrayals so severe that a grindingly slow due process is actually itself an additional betrayal. I'm not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.
Interestingly, the asymptotically fastest known algorithm for minimum weight bipartite matching [A] uses an interior point method, which means it's also doing Riemannian optimization in some sense.
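For readers unfamiliar with the problem, here's a minimal sketch of what minimum weight bipartite matching computes. This brute-force version (with an illustrative cost matrix I made up) is purely for definition's sake; it's the interior-point LP methods mentioned above, not this, that achieve the best known asymptotic bounds:

```python
from itertools import permutations

# cost[i][j] = weight of the edge between left node i and right node j.
# Goal: pick a perfect matching (a bijection left -> right) of minimum total weight.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
n = len(cost)

# Brute force over all n! bijections -- fine for n = 3, hopeless at scale.
best = min(sum(cost[i][p[i]] for i in range(n)) for p in permutations(range(n)))
print(best)  # 5  (matching 0->1, 1->0, 2->2)
```

In practice you'd use a Hungarian-algorithm solver (e.g. scipy.optimize.linear_sum_assignment) rather than enumeration; the interior-point approach instead solves the matching LP directly.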
>>>
Jonathan Friedman, Sy Syms director of PEN America’s U.S. Free Expression programs, said:
“The irony cannot be lost here: government officials have used their positions to muscle out a scholar of authoritarianism from a prestigious lecture,"
<<<
That doesn't really change the fact that it's exhausting (and worse, "commercially offputting") to be reminded that we're careening towards the worst futures literally imagined. I stayed away from Soylent and I'll probably stay away from this, but thanks for the heads-up. rimshot
As big PKD fans, that definitely flew over our heads a bit. Can def understand that view and understand why it's commercially exhausting, especially because we agree that we're heading toward some of the worst futures possible; so did PKD. We definitely build with this in mind!
But the starting point of neural networks in the ML/AI sense is cybernetics + Rosenblatt's perceptron, research done by mathematicians (who became early computer scientists).
That's why I wrote that it was unexpected. I'm not taking a position on whether this was deserved or undeserved, but this was clearly in the realm of physics and inspired by it.
Accepting wrong arguments in support of positions you already hold is not a good way to live your life. It leads to constipation.