I think this is going to look a lot like the same problem in education, where the answer is that we will have to spend less time consuming written artifacts as a form of evaluation. I think effective code reviews will become more continuous and involve much more checking in, with asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.
AI creates the same problem for hiring too: it generates the appearance of knowledge. The problem you and I have as evaluators of that knowledge is there is no other interface to knowledge than language. In a way this is like the oldest philosophy problem in existence. Socrates spent an inordinate amount of time railing against the sophists, people concerned with language and argument rather than truth. We have his same problem, only now on an industrial scale.
To your point about tests, I think the answer is to not focus on automated tests at first (though of course you should have those eventually), but instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.
Evaluating written artifacts is broken in education because the end goal of education is not the production of written artifacts; it is the production of knowledge in someone's mind, and the artifacts were only ever intended to show whether that knowledge transfer had occurred. Now they no longer provide evidence of that. A ChatGPT-written essay about the causes of the Civil War is of no value to a history professor, since he does not actually need to learn about the Civil War.
But software development is about producing written artifacts. We actually need the result. We care a lot less about whether or not the developer has a particular understanding of the world. A Cursor-written implementation of a login form is of use to a senior engineer because she actually wants a login form.
I think it's both actually, and you're hitting on something I was thinking of while writing that post. I'm reading "The Perfectionists," which is about the invention of precision engineering. It had what I would consider three aspects, all of which we should care about:
1. The invention of THE CONCEPT BEHIND THE MACHINE. In our context, this is "Programming as Theory Building." Our programs represent some conception of the world that is NOT identical to the source code, much the way early precision tools embodied philosophies like interchangeability.
2. The building of the machine itself, which has to function correctly. To your point, this is one of the major things we care about, but I don't agree it's the only thing. In the code world, this IS the code. When this is all we think about, though, I think you get spaghetti codebases and poorly trained developers.
3. Training apprentices in both the ideas and the craft of producing machines.
You can argue we should only care about #2 (many businesses certainly incentivize thinking in that direction), but I think all 3 are important. Part of what makes coding, and talking about coding, tricky is that written artifacts, even the same written artifacts, express all 3 of these things, and so matters get very easily confused.
This is a key difference, but I think it plays less of a role than it initially appears, because growing employees' knowledge helps them build better artifacts faster (and fix them when things go wrong). Short term, the login form is what's desired. But long term, what's desired is someone with enough knowledge to support the login form for when the AI doesn't quite get it all right.
> instead we should ask people to actually run the code while they explain it to show it working. That's a much better test: show me how it works, and explain it to me.
There's a reason no one does it: it's inefficient, even in recorded video format. The helpful things are tests and descriptive PRs. The former because its structure is simple enough that you can judge it, and the test run can be part of the commit. The latter for the simple fact that if you can write clearly about your solution, I can then just do a diff of what you told me against what the code is doing, which is way faster than me trying to divine both from the code.
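To illustrate what "simple enough that you can judge it" means here, a minimal sketch (not from the thread, with a hypothetical toy validator standing in for the real code under review): the expected behavior is stated directly in the assertions, so a reviewer can diff the PR description against the tests at a glance, and the test run itself can be attached to the commit or CI log.

```python
# Hypothetical, self-contained example of a test a reviewer can judge quickly.
# validate_credentials is a toy stand-in, not anyone's actual login code.

def validate_credentials(email: str, password: str) -> bool:
    """Toy rule: email must contain '@' and password must be 8+ characters."""
    return "@" in email and len(password) >= 8


def test_rejects_empty_password():
    # Expected behavior is readable straight from the assertion.
    assert validate_credentials("alice@example.com", "") is False


def test_accepts_well_formed_input():
    assert validate_credentials("alice@example.com", "correct horse") is True


if __name__ == "__main__":
    # Running this directly lets the author show the tests passing as part of the commit.
    test_rejects_empty_password()
    test_accepts_well_formed_input()
    print("all checks passed")
```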
> asking for explanations as the starting point instead of "I read all of your code and give feedback." That just won't be sustainable given the rate at which text can now be output.
I claim that this approach is sustainable.
The idea behind the "I read all of your code and give feedback" methodology is that the writer put a lot of deep effort into making sure the code is of great quality, and is then expecting feedback, which is often valuable. As long as you can, with some effort, find out by yourself how improvements could be made, don't bother asking for someone else's time.
The problem is that the writers of "vibe-generated code" hardly ever put such deep effort into it. Thus the code is simply not worth asking feedback on.
I think asking people to explain is good, but it's not scalable. I do this in interviews when I suspect someone is cheating, and it's very easy to see when they've produced something they don't understand. But it takes a long time to run through the code, and if we had to do that for everything because we can't trust our engineers anymore, that would actually decrease productivity, not increase it.