As a long-time embedded programmer, I don't understand this. Even 20 years ago, there was no way I really understood the machine, despite writing assembly and looking at compiler output.
10 years ago, running an ARM core at 40 MHz, I barely had the need to inspect my compiler's assembly. I could still roughly read things when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time using a massively inefficient library like Arduino to try things out. If it works and the timing is correct, it works.
These days, when I don't do embedded for work, I have no qualms writing my embedded projects in MicroPython. I want to build things, not micro-optimize assembly.
> As a long time embedded programmer, I don't understand this
I think you both should define what your embedded systems look like. The range is vast, after all: it runs from 8-bit CPUs [0] with a few dozen kilobytes of RAM to what is almost a full modern PC. Naturally, the incentives to program at a low level are very different across that range.
I was trying to bit-bang five 250 kHz I2C channels on a 16 MHz ATtiny while acting as an I2C slave on a sixth channel.
This is really not something you can do with normal methods: the CPU is just too slow and the compiler-generated assembly is too long. No high-level language can do what I want because the compiler is too stupid. My inline assembly is simple and fast enough that I can get the bitrate I need.
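To see why C won't cut it here, a rough cycle budget helps (my own back-of-the-envelope numbers, not the commenter's, and ignoring the overhead of servicing the sixth, slave channel):

```python
# Cycle budget for five bit-banged 250 kHz I2C masters on a 16 MHz AVR.
cpu_hz = 16_000_000
i2c_hz = 250_000

cycles_per_bit = cpu_hz // i2c_hz        # 64 CPU cycles per SCL period
cycles_per_half = cycles_per_bit // 2    # 32 cycles per SCL half-period
channels = 5
budget_per_channel = cycles_per_half // channels  # ~6 cycles per channel

print(cycles_per_bit, cycles_per_half, budget_per_channel)  # 64 32 6
```

Roughly six cycles per channel per half-period, on a core where a single port-bit set/clear (`sbi`/`cbi`) already costs two cycles. Compiler-generated prologue, bounds checks, or a function call blows that budget instantly, which is why hand-written inline assembly is the only way to hit the rate.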
In my view, there are two approaches to embedded development: programming à la mode with Arduino and whatever unexamined libraries you find online, or the register-hacker path.
There are people who throw down any code that compiles and move on to the next thing without critical thought. The industry is overflowing with them. Then there are the people who read the datasheet and the instruction set: the people painstakingly writing drivers for their I2C widgets instead of shoving magic strings into Wire.write.
I enjoy micro-optimizing assembly. I find it personally satisfying and rewarding. I thoroughly examine and understand my work because no project is actually a throwaway. With every project I learn something new, and I have a massive library of tricks I can draw from in all kinds of crazy situations.
If you did sit down to thoroughly understand the assembly of your firmware projects, you'd likely be aghast at the quality of code you're blindly shipping.
All that aside for a moment, consider the real cost of letting your CPU run code 10x slower than it should. Your CPU runs 10x longer and consumes a proportional amount of energy. If you're building a battery powered widget, that can matter a lot. If your CPU is more efficient, you can afford a lighter battery or less cooling. You have to consider the system as a whole.
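To put a number on that, here's a sketch of the battery math for a duty-cycled widget. All the figures (coin-cell capacity, active and sleep currents) are hypothetical illustrations I've chosen, not anything from the thread:

```python
# Hypothetical battery-powered widget that wakes once per second to do work,
# then returns to deep sleep. Illustrative numbers only.
battery_mah = 220     # CR2032-class coin cell
active_ma = 5.0       # current while the CPU is running
sleep_ua = 2.0        # deep-sleep current

def battery_life_days(active_ms_per_second):
    """Average current for the duty cycle, then battery life in days."""
    duty = active_ms_per_second / 1000.0
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)
    return battery_mah / avg_ma / 24.0

fast = battery_life_days(1)    # efficient code: 1 ms of work per second
slow = battery_life_days(10)   # 10x slower code: 10 ms per second
print(round(fast), round(slow))
```

With these assumptions the optimized firmware lasts several times longer on the same cell, because once sleep current is low enough, active time dominates the energy budget almost linearly.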
This attitude of "ship anything as quickly as possible" is very bad for the industry.