This is a valid point missed by many today. The mantra of "don't optimise early" is often used as an excuse not to optimise at all, and so you end up with a lot of minor choices scattered throughout the code which each suck a tiny bit of performance out of the system. Fixing any one of them is also considered worthless, as the improvement from any single change is minuscule. But added up, they become noticeable.
> Is it because I made hundreds of decisions like that? Yes.
Proof needed. Perhaps your overall program is designed to be fast and avoid silly bottlenecks, and these "hundred decisions" didn't really matter at all.
But do you have actual proof for your first claim? Isn't it possible that the "constant vigilance" is optimizing the ~10% that doesn't really matter in the end?
For example, C++ can shoehorn you into a style of programming where 50% of the time is spent in allocations and deallocations, even if your code is otherwise optimal.
The only way to get that time back is not to use STL containers in "typical patterns" but, up to a point, to write your own containers.
If you didn't do that, you'd see in the profiler that heap operations take 50% of the time, but there would be no obvious hotspot.
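To make that concrete, here is a minimal hypothetical sketch (MakeLabelsTypical and FixedVec are invented names, not from any real codebase). The first function shows the "typical pattern", whose cost is smeared across many small heap allocations with no single hotspot; the second is the kind of tiny custom container that sidesteps the heap entirely:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// The "typical pattern": one heap allocation per string, plus reallocations
// as the vector grows. No single call is a hotspot; the allocator cost is
// spread thinly across all of them.
std::vector<std::string> MakeLabelsTypical(size_t n) {
    std::vector<std::string> labels;
    for (size_t i = 0; i < n; i++) {
        labels.push_back("item-" + std::to_string(i));
    }
    return labels;
}

// One way to claw the time back: a tiny fixed-capacity container that lives
// entirely on the stack, so appends never touch the heap. (Illustrative
// only; a real replacement would also handle overflow and growth.)
template <typename T, size_t Cap>
class FixedVec {
    T items_[Cap];
    size_t len_ = 0;

public:
    bool push(const T& v) {
        if (len_ == Cap) return false; // caller decides what overflow means
        items_[len_++] = v;
        return true;
    }
    size_t size() const { return len_; }
    const T& operator[](size_t i) const { return items_[i]; }
};

int main() {
    auto labels = MakeLabelsTypical(4);      // several heap allocations
    FixedVec<int, 8> ids;                    // zero heap allocations
    for (int i = 0; i < 4; i++) ids.push(i);
    printf("%s / %zu ids\n", labels[0].c_str(), ids.size());
}
```

A profiler run over code full of the first pattern shows exactly what's described above: operator new and free near the top, and nothing else that looks fixable.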
I wrote it to share my implementation and my experience with it.
SumatraPDF compiles fast (relative to other C++ software) and is smaller and faster, and uses fewer resources, than other software.
Is it because I wrote Func0 and Func1 to replace std::function? No.
Is it because I made hundreds of decisions like that? Yes.
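For readers who haven't seen the post: as a rough idea of the shape such a std::function replacement can take (a hypothetical sketch, not the actual SumatraPDF Func0/Func1; Callback0 and PrintGreeting are invented names), a plain function pointer plus a context pointer avoids std::function's type erasure and its possible heap allocation:

```cpp
#include <cstdio>

// Hypothetical minimal callback type: a function pointer plus a context
// pointer. Trivially copyable, no heap allocation, no virtual dispatch.
struct Callback0 {
    void (*fn)(void*) = nullptr;
    void* ctx = nullptr;

    void Call() const {
        if (fn) {
            fn(ctx);
        }
    }
};

static void PrintGreeting(void* ctx) {
    printf("hello, %s\n", static_cast<const char*>(ctx));
}

int main() {
    static char name[] = "world";
    Callback0 cb;
    cb.fn = PrintGreeting;
    cb.ctx = name;
    cb.Call();
}
```

The tradeoff is explicitness: the caller owns the context's lifetime and capturing lambdas don't fit, which is exactly the kind of small deliberate decision being described here.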
You're not wrong that the individual performance wins are minuscule.
What you don't understand is that eternal vigilance is the price of liberty. And small, fast software.