
There’s no way you or the AI wrote tests to cover everything you care about.

If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.

If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.

As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.




> As long as AIs are either non-deterministic or chaotic

You just hit the nail on the head.

LLMs are stochastic. We want deterministic code. The way you get that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
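As a minimal sketch of what "bolting on deterministic checks" could look like (the gate rules here are made up for illustration, e.g. rejecting `eval`/`exec` calls), the same input always produces the same verdict regardless of which model generated the code:

```python
import ast

def gate(source: str) -> bool:
    """Deterministic checks applied to LLM-generated code.
    Same input string always yields the same verdict."""
    try:
        tree = ast.parse(source)  # must be syntactically valid Python
    except SyntaxError:
        return False
    # AST pattern check: reject anything that calls eval or exec
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"eval", "exec"}:
                return False
    return True

print(gate("x = 1 + 2"))       # True
print(gate("eval('danger')"))  # False: forbidden call
print(gate("def f(:"))         # False: syntax error
```

A real pipeline would chain several such gates (linter, type checker, regression tests); the point is only that each gate is a pure function of the candidate code.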

One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.


None of those things make it deterministic though. And they certainly don’t make it non-chaotic.

You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.

You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.


> None of those things make it deterministic though.

In the information-theoretical sense you're correct, of course. I mean, it's a variation on the halting problem, so there will never be any guarantee of bug-free code. Heck, the same is true of human code and its foibles. However, in the "does it work or not" sense, I'm not sure why we care.

If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?

Let's say the linter says it's good, it passes the regression tests, you've validated that it only outputs what it's supposed to and does it in a reasonable amount of time, and maybe you're even super paranoid so you ran it through some mutation tests just to be sure that invalid inputs didn't lead to unacceptable outputs. How can it really be non-deterministic after all that? I get that it could still be doing some 'other stuff' in the background, or doing it inefficiently, but if we care about that we just add more tests for that.
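The gate described above could be sketched like this (the function names and the two-second default are assumptions, not anything from a real system):

```python
import time

def digit_gate(produce, timeout_s: float = 2.0):
    """Accept the program's output only if it is a digit 0-9
    produced within the deadline; reject (None) otherwise."""
    start = time.monotonic()
    out = produce()  # run the code under test
    elapsed = time.monotonic() - start
    if elapsed > timeout_s:
        return None  # too slow: rejected
    if isinstance(out, int) and 0 <= out <= 9:
        return out   # in range and on time: accepted
    return None      # wrong type or out of range: rejected

print(digit_gate(lambda: 7))   # 7
print(digit_gate(lambda: 42))  # None: out of range
```

From the gate's point of view, any implementation that passes is interchangeable; what it can't see is everything the code does besides producing that digit.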

I suppose there's the impossible-problem edge case, i.e. you might never get an answer that works and satisfies all constraints. It's happened to me with vibe-coding several times and once resulted in the agent tearing up my codebase, so I learned to include an escape hatch for when it's stuck between constraints ("email user123@corpo.com if stuck for 'x' turns, then halt"). Now it just emails me and waits for further instruction.
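The escape-hatch idea amounts to a bounded retry loop; a rough sketch, where `attempt`, `passes_gate`, and `notify` are all hypothetical stand-ins for the agent, the validation gate, and the email step:

```python
def run_with_escape_hatch(attempt, passes_gate, max_turns=5, notify=print):
    """Retry up to max_turns; if no candidate ever satisfies the
    constraints, stop and escalate to a human instead of flailing."""
    for turn in range(1, max_turns + 1):
        candidate = attempt(turn)
        if passes_gate(candidate):
            return candidate
    notify(f"stuck after {max_turns} turns; waiting for instructions")
    return None

# Contrived run: attempts only satisfy the gate from turn 3 onward.
result = run_with_escape_hatch(
    attempt=lambda turn: turn * 2,
    passes_gate=lambda c: c >= 6,
)
print(result)  # 6
```

The key design point is the hard turn limit: without it, an agent caught between contradictory constraints will keep mutating the codebase indefinitely.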

To me, perfect is the enemy of good and good is mostly good enough.


> If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?

If that’s all the code does, sure you could specify every observable behavior.

In reality, though, there are tens of thousands of "design decisions" that a programmer or LLM is going to make when translating a high-level spec into code. Many of those decisions aren't even things you'd care about individually, but users will notice the cumulative impact of them constantly flipping.

In a real world application where you have thousands of requirements and features interacting with each other, you can’t realistically specify enough of the observable behavior to keep it from turning into a sloshy mess of shifting jank without reviewing and understanding the actual spec, which is the code.



