For many problems, especially concurrency-related ones, it is much less powerful than tracepoints. The issue I have seen is that some tools have unergonomic support for tracing; in gdb the tracing support is bad enough that I tend to fall back to breakpoints or printf debugging.
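To illustrate what I mean (a minimal sketch; handle_request and req->id are made-up names): gdb's dprintf at least gives you a non-stopping, printf-style probe, while full tracepoints take several commands and generally only work against a remote target like gdbserver.

    (gdb) dprintf handle_request, "request id=%d\n", req->id
    (gdb) continue

    # Full tracepoints are clunkier: they generally require a
    # remote target (gdbserver) plus explicit collect actions.
    (gdb) trace handle_request
    (gdb) actions
    > collect req->id
    > end
    (gdb) tstart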
There is a good argument for never using debuggers except during core development: once finished, your logs/metrics/events should be good enough to understand what is happening in the application. If debugging your application requires breakpoints, you won't really be able to debug a live instance, and you won't be able to easily tell what is happening in the future.
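For example (a minimal sketch in C; the event name and fields are invented), the idea is that a structured log line sits where you would otherwise set a breakpoint, and it keeps working on a live instance:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical structured logger: one greppable line per event,
       usable on a live instance where a breakpoint would not be. */
    static void log_event(const char *event, long request_id)
    {
        fprintf(stderr, "ts=%ld event=%s request_id=%ld\n",
                (long)time(NULL), event, request_id);
    }

    /* At the spot where you might otherwise break: */
    /* log_event("cache_miss", req_id); */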
That is a reasonable argument, but it was not made in the article, and it does not preclude the use of breakpoints (your "except" clause covers a lot of ground).
I'm curious: what industry are you in, and what tech stack are you using?