I think the evidence shows that "fast" software is not actually desired by the majority of companies, otherwise they would put more emphasis on using compiled languages, optimisation and reducing technical debt. Sure, everybody will take a speed improvement if it's free but most companies will go for predictable, maintainable and error free software over speed every time.
A lot of that maintainability and lack of errors will come from proper design: a good design partitions the complex problem into many really simple problems, resulting in easily testable code. And that code is easy to optimize, because it's easy to make changes when you have decent test coverage.
Now, this software won't be as fast as what's possible; if you need that last bit of speed, the optimizations will turn it ugly again in many cases. But it will be decently fast. It all starts with good design, proper intuition about which algorithms to use, and a good dose of creative problem solving.
I spun up a very simple REST API that returned an input parameter, and ran it under load, using ASP.NET and Express.js. There wasn't any architecture or design; it was one function. Node/Express had 10x less throughput than the ASP.NET version, with p99 latency being multiple orders of magnitude larger than the ASP.NET version's.
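For context, the whole benchmark endpoint is about this small. A sketch of the kind of echo handler described (the original used Express; this uses Node's built-in `http` module so it runs without installing anything, and the parameter name is illustrative):

```typescript
import { createServer } from "node:http";
import { URL } from "node:url";

// Pure handler logic, separated out so it is trivially testable.
export function echo(value: string | null): string {
  return value ?? "";
}

// One function, no architecture: parse the query string, echo it back.
const server = createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(echo(url.searchParams.get("value")));
});

// Only start listening when explicitly asked to, so importing this
// file (e.g. from a test) does not block.
if (process.env.RUN_SERVER) server.listen(3000);
```

With so little code in play, the throughput difference measured here is almost entirely the runtime and framework, not the application logic.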
Of course you can hide the biggest complexity behind a single function.
What I was getting at is that if I see code with a lot of duplication, often it's not only the code that's duplicated, but the runtime work as well. Then you have people using nested for loops where a dictionary lookup would do.
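The nested-loop-versus-dictionary point looks like this in practice. A sketch joining two lists (the `Order`/`Customer` names are made up for illustration): the nested version scans every customer for every order, O(n * m), while building a `Map` first makes each lookup O(1), for O(n + m) overall:

```typescript
type Customer = { id: number; name: string };
type Order = { customerId: number; total: number };

// Nested loops: a full scan of customers for every single order.
function joinNested(orders: Order[], customers: Customer[]) {
  const out: { name: string; total: number }[] = [];
  for (const o of orders) {
    for (const c of customers) {
      if (c.id === o.customerId) {
        out.push({ name: c.name, total: o.total });
        break;
      }
    }
  }
  return out;
}

// Same result, but customers are indexed by id once up front,
// so each order does a constant-time Map lookup.
function joinWithMap(orders: Order[], customers: Customer[]) {
  const byId = new Map(customers.map((c) => [c.id, c] as const));
  return orders.flatMap((o) => {
    const c = byId.get(o.customerId);
    return c ? [{ name: c.name, total: o.total }] : [];
  });
}
```

Both functions return the same rows; only the amount of work differs, which is exactly the kind of change that is cheap to make when the code is small and tested.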
Fixes like this, within the same language and the same framework, make code faster on average, and simpler too.
And if that piece of badly architected code is fragile and breaks on every change, you will stop trying to find even the low-hanging fruit of optimizations that usually pay off, because every change touches multiple files and you have no tests to quickly check that it still works.
Thinking about your reply again, it's actually the perfect example of this. Since it was a one-liner, it was simple enough to swap for a different one-liner in a different framework just to see if it's faster. If you had to spend weeks replacing frameworks, you wouldn't have been able to attempt that low-hanging-fruit optimization at all.
You've totally missed my point here: the fact is that no amount of low-hanging-fruit optimisations in the Node app, or architectural improvements, or clean code will _ever_ close that 10x gap. Almost all of the decisions you make after the fact (e.g. DRY, DI, composition, and even in many cases data structures) are meaningless on both a macro and micro level compared to the very basic decision you make first: what it runs on.
What matters for companies is whether they can accomplish tasks faster than they can do them today. And it's rare that the difference between executing Python or C code is the bottleneck in that process.