You totally can identify performance issues by reading code: e.g. spotting accidentally quadratic algorithms, forgetting to reserve vectors, or making accidental copies in C++. Or, in more amateur code (not mine!), using strings to do things that can be done without them (rounding numbers, for example; yes, people do that).
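
A minimal sketch of two of those patterns (the function and its names are purely illustrative):

    #include <string>
    #include <vector>

    // Sketch only: two patterns that are often visible from a read-through.
    std::vector<std::string> copy_all(const std::vector<std::string>& input) {
        std::vector<std::string> out;
        // Forgetting out.reserve(input.size()) here forces repeated
        // reallocations as the vector grows.
        for (const auto& s : input) {  // `auto s` instead would copy every string
            out.push_back(s);
        }
        return out;
    }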

It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".



Ok, I'll bite. How do you identify that a performance uplift in one part of the code will kill the performance of the overall app? Or won't have any observable effect?

I'm not saying you can't spot naive performance pitfalls. But how do you spot cache misses by reading the code?


For example, if someone uses a linked list where a vector would have worked. Vectors are much faster, partly due to better spatial locality.
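
Roughly, the same traversal over each container (a sketch, not a benchmark):

    #include <list>
    #include <numeric>
    #include <vector>

    long long sum_vector(const std::vector<int>& v) {
        // Elements are contiguous, so caches and the prefetcher work in our favour.
        return std::accumulate(v.begin(), v.end(), 0LL);
    }

    long long sum_list(const std::list<int>& l) {
        // Each node is a separate allocation; traversal chases pointers and
        // tends to miss cache far more often.
        return std::accumulate(l.begin(), l.end(), 0LL);
    }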


Ok (that's a naive performance problem). Say you speed that up, but now a shared resource is mutated more often, leading to frequent lock contention and more pauses overall. How would you read that from the code?
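
For the sake of argument, a hypothetical shape of that situation (all names invented):

    #include <mutex>
    #include <vector>

    // Hypothetical: once the per-item work gets cheaper, threads reach this
    // shared mutex more often and contention grows.
    struct SharedStats {
        std::mutex m;
        long long total = 0;

        void add(long long x) {
            std::lock_guard<std::mutex> lock(m);  // serializes every caller
            total += x;
        }
    };

    void worker(SharedStats& stats, const std::vector<long long>& items) {
        for (long long x : items) {
            // The faster this loop body gets, the larger the share of time
            // each thread spends waiting on stats.m.
            stats.add(x);
        }
    }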


Practitioners of this approach to performance optimization often waste huge swaths of their colleagues' time and attention on pointless arguments about theoretical optimizations. It's much better to have a measurement-first policy: "hmm, that might be slow" is a good signal that you should measure how fast it is, and nothing more.
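
A minimal measure-it-first sketch (do_work is just a placeholder for the code in question; a profiler tells you far more):

    #include <chrono>
    #include <iostream>
    #include <vector>

    // Stand-in for whatever code "might be slow".
    void do_work() {
        std::vector<int> v;
        for (int i = 0; i < 1'000'000; ++i) v.push_back(i);
    }

    int main() {
        const auto start = std::chrono::steady_clock::now();
        do_work();
        const auto stop = std::chrono::steady_clock::now();
        const auto us =
            std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
        // A profiler tells you *where* the time goes; this only answers "how long?".
        std::cout << "do_work took " << us.count() << " us\n";
    }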



