Most of what was listed above isn't the kind of fundamentals that would help someone properly understand what's happening inside neural networks; those are entirely different branches of machine learning that have little in common with neural networks. Someone who learns SVMs, naive Bayes, and gradient boosting will certainly have broader knowledge, but it will be just as shallow. To use your analogy, it's not like trying to program in JS without understanding for loops; it's more like trying to program in JS without understanding C, COBOL, and Haskell.
I'm all for learning the fundamentals properly - but those fundamentals are going to be completely different things: the core principles of statistics (limitations of correlation, confounders, the bias/variance tradeoff, etc.), the parts of calculus and linear algebra that actually matter for understanding optimization, and best practices for managing data, experiments, and measurement so you don't fool yourself. They are not a checklist of many different, parallel machine learning methods like decision trees or reinforcement learning, which are useful and interesting in their own right, but neither related nor required to properly apply, e.g., transformer-based large language models to your task.
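To make the "limitations of correlation / confounders" point concrete, here's a minimal illustrative sketch (the variables and coefficients are made up for demonstration): a hidden confounder drives two unrelated quantities, producing a strong correlation that vanishes once you control for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: z drives both x and y; x has no effect on y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = -3.0 * z + rng.normal(size=n)

# Raw correlation looks strong (~ -0.85) even though x does not cause y.
print(np.corrcoef(x, y)[0, 1])

# Controlling for z (residualizing both variables against it)
# makes the spurious correlation vanish (~ 0).
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
print(np.corrcoef(x_resid, y_resid)[0, 1])
```

That's the kind of statistical literacy that transfers directly to evaluating any model, neural or otherwise, whereas knowing one more parallel method family doesn't.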