When asked to prove it, it spelled out the letters one by one and still failed (ChatGPT asserted the answer was still 2; Claude “corrected” itself to 1). Only when forced to write a running count beside each letter did it get the answer right.
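For concreteness, the bookkeeping I had to force it into is completely mechanical; a few lines of Python do it without any “reasoning” at all (the word and letter below are stand-ins, not my exact question):

    def tally(word, target):
        # Spell out each letter with a running count of the target letter,
        # i.e. the decomposition the model only followed when forced to.
        count = 0
        for i, ch in enumerate(word, start=1):
            if ch.lower() == target:
                count += 1
            print(f"{i}. {ch}  (count of '{target}' so far: {count})")
        return count

    tally("strawberry", "r")  # stand-in example; prints the tally and returns 3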
It’s not really about the specific question; it just highlights that the model doesn’t have the ability to comprehend and reason. It’s a prediction machine.
If it cannot decompose such a simple problem, how can it possibly get complex programming problems right, the kind that can’t simply be pattern-matched to a known solution? My experience with ChatGPT, Claude, and Copilot writing code demonstrates this. It often generates code that looks correct on the surface, but when tested it either fails outright or fails in subtle ways.
Even things like CSS it gets wrong, producing output that on the surface seems to do what you asked but doesn’t actually style things correctly at all.
Its lack of ability to understand, decompose, and reason is the problem. The fact that it’s so confident even when wrong is the problem. The fact that it cannot detect when it doesn’t know is the problem.
It generates text that has a high probability of “looking” correct, not text that has a high probability of being correct. With simple questions like the one I posed, it’s obvious to us when it gets things wrong. With complex programming tasks, the solution is involved enough that it often takes significant effort to determine whether it’s right or wrong. There’s more room for it to “look” correct without “being” correct.
> But if you've never tried GitHub Copilot
I used it for almost a year before cancelling my subscription because it wasn’t adding much value. I found Copilot Chat a bit more useful, but ChatGPT was good enough for that. I still use ChatGPT when programming: as a tool to help with documentation (“what’s the React function to do X?”-type questions), to rubber-duck, to ask for pros-and-cons lists on ideas or approaches, and to get starting points. But never to write the code for me, at least not without the expectation of significant rewriting, unless it’s super trivial (but then I likely would have written it faster myself anyway).
Thanks for taking the time to answer so thoroughly :)
In that case I stand corrected; I'd just assumed you hadn't used Copilot because, to me, it was so much more effective at aiding with programming than ChatGPT. But I suspect that very much depends on the use case. I liked it a lot for e.g. writing numpy code, where I'd have had to look up the documentation on every function otherwise, or for writing database migrations by hand, where the patterns are very clear, and in those situations it felt like a huge time-saver. But for other applications it didn't help at all, or admittedly even introduced subtle bugs that were fun to find and fix.
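To give a made-up example of what I mean (not from my actual code): the kind of line where I'd otherwise be checking whether mean and std take axis and keepdims:

    import numpy as np

    # Made-up example: per-column z-score normalisation of a random matrix.
    x = np.random.default_rng(0).normal(size=(100, 5))
    z = (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)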
After my free year of Copilot ran out I also didn't re-subscribe, because I have too many AI-related subscriptions as it stands, but I'd definitely (carefully) use it if I had access to it via an org or an open-source project.
To be completely fair, there are some things where I did have success getting code generated. For example, I made a little Python script to pull fields out of TOML files and convert them to CSV (so that I could import the data into a spreadsheet). It did mostly OK on this (in that I didn’t have to edit the final code much, and it was in fact faster than writing it all myself).
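For reference, the script was roughly this shape; the field names and file layout below are made up, since I don’t have the original to hand (tomllib needs Python 3.11+):

    import csv
    import sys
    import tomllib
    from pathlib import Path

    FIELDS = ["name", "version", "license"]  # hypothetical keys to pull out

    writer = csv.writer(sys.stdout)
    writer.writerow(FIELDS)
    # Walk a directory of TOML files passed on the command line, one CSV row per file.
    for path in sorted(Path(sys.argv[1]).glob("*.toml")):
        with open(path, "rb") as f:  # tomllib requires binary mode
            data = tomllib.load(f)
        writer.writerow([data.get(key, "") for key in FIELDS])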
But the cases where I found its code good enough were 1) fairly easy tasks (i.e. I didn’t need AI to do them, but it still saved some time), and 2) not that common for the type of development I’ve mostly been doing. The problem is that I’ve often wasted significant time figuring out whether or not a task falls into that bucket, so in the long run it just doesn’t feel that useful to me as a “write code for me” tool. But as I said, I do find AI a useful aid, just not to write my code for me.
Basically, I use it like this person does: https://news.ycombinator.com/item?id=41350207