I think you can reframe this debate pragmatically and widen its applicability considerably: at what point is "bad" code more effective than the alternatives? If you get into a debate about "best practices", you'll have to concede that anyone writing the kind of code the author describes might be applying "best practices" in some nominal way, but isn't genuinely following them, since best practices exist precisely to avoid the difficulties he outlines.

On the other hand, it's true that most code out there is bad code, and that a heavily architected system built from bad code can be an even worse nightmare than straightforward bad code. So the real question is: when should scientists favor bad code?

I'm a huge fan of best practices and of thoughtful, elegant coding, but I could see an argument that in most circumstances scientific code is better off being bad code, as long as you keep it isolated. I'd love to see someone make that argument. The isolation part is the crux, and a sketch of what it might look like follows below.
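To make "isolated" concrete, here's a minimal sketch of one way it could work: the quick-and-dirty exploratory logic lives behind a single small function with a documented contract, so nothing else in the project ever depends on its internals. All the names here (`normalize_counts`, the CSV layout) are hypothetical illustrations, not anything from the original discussion.

```python
# Sketch of "isolated bad code": callers see only the contract of
# normalize_counts(); everything inside it is deliberately sloppy
# and free to be rewritten or thrown away.

import csv
from pathlib import Path


def normalize_counts(csv_path: str) -> dict[str, float]:
    """Given a CSV of (sample, count) rows with a header, return counts
    normalized to sum to 1.0. Below this docstring is the messy zone."""
    # Hard-coded assumptions, no validation, loads everything into memory.
    with Path(csv_path).open() as f:
        rows = list(csv.reader(f))
    counts = {r[0]: float(r[1]) for r in rows[1:]}  # skip header, trust format
    total = sum(counts.values()) or 1.0             # avoid divide-by-zero
    return {k: v / total for k, v in counts.items()}


if __name__ == "__main__":
    # Tiny fixture so the sketch is self-contained and runnable.
    Path("counts.csv").write_text("sample,count\na,3\nb,1\n")
    print(normalize_counts("counts.csv"))  # {'a': 0.75, 'b': 0.25}
```

The point of the design isn't that the inside is good; it's that the boundary is small enough that the badness can't leak into the rest of the analysis.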