If the difficulty of the LC questions were calibrated for the role, I'd agree. But the times I've run into LC, it was rarely the "easy" questions. Instead I've been thrown LC questions that would take someone already familiar with the solution a couple of hours to implement, while I'm going in cold and given 30 minutes with someone watching me. Or worse, I've seen LC questions that were once the basis for someone's CS PhD. It might have taken Dijkstra 20 minutes to come up with the algorithm he's most famous for, but you're not interviewing Edsger Dijkstra here.
I saw a picture a couple of weeks ago of a person on Twitter who had lost around 90% of his followers because they were bots. He complained that he had spent the last 7+ years building up this "community", which in the end consisted of bots.
As recently as yesterday I saw two YouTube Shorts using deepfakes of celebrities to push crypto Ponzi schemes. I wonder how widespread bot usage is, and how it connects to the "echo chambers" that seem to echo louder and louder into mainstream media.
Have you noticed any caveats in terms of code quality?
I study mathematics, and I think it is pretty standard for mathematicians to disregard theoretical run times when experimenting and doing "napkin" computations. Invariably this leads to relatively poor code quality. Speaking from experience, I just failed a coding interview because I solved the question the way a mathematician would, i.e. in a quick and dirty way. What is your experience with this phenomenon? I know academics often get railed on for lacking grounding in data structures and the like.
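To make the gap concrete, here's a minimal sketch (my own illustrative example, not the actual interview question) using the classic two-sum problem in Python. The brute-force pairwise check is the kind of "napkin" solution I mean, correct but O(n^2), while the hash-map version is the data-structures answer interviewers usually expect:

    from typing import List, Optional, Tuple

    def two_sum_napkin(nums: List[int], target: int) -> Optional[Tuple[int, int]]:
        # Quick-and-dirty O(n^2) version: check every pair.
        # Fine for napkin-scale inputs, slow on large ones.
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return i, j
        return None

    def two_sum_interview(nums: List[int], target: int) -> Optional[Tuple[int, int]]:
        # O(n) version using a hash map of values already seen.
        seen = {}  # value -> index
        for i, x in enumerate(nums):
            if target - x in seen:
                return seen[target - x], i
            seen[x] = i
        return None

    nums = [3, 7, 11, 15, 2]
    print(two_sum_napkin(nums, 9))     # (1, 4)
    print(two_sum_interview(nums, 9))  # (1, 4)

Both give the right answer; the first is just what you naturally write when asymptotics aren't on your mind.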