+1 on the references as a nice introduction on the whole. I think the authors overstate the preparation of their hypothetical "pedestrian" (either that or they need to get away from the physics department a bit more often), but it is a great reference nevertheless. I also got a lot out of sections of Nishimori's textbook [1]. In particular, it helps motivate problems outside of physics and provides some references for digging into more rigorous approaches via cavity methods (which, incidentally, I think are also more intuitive). I am a novice in this area, but I am sort of crossing my fingers that some of these ideas will make their way into algorithms for inferring latent variables in the upcoming modern associative neural networks [2]. What I mean is that it would be cool to have not just an understanding of the total capacity of the network, but also correct variational approximations to use during training.
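For anyone curious what retrieval looks like in those modern associative networks, here is a minimal NumPy sketch of the continuous Hopfield update from the hopfield-layers work [2], xi_new = X softmax(beta X^T xi); the function and variable names are my own, and the patterns are toy data:

```python
import numpy as np

def softmax(v):
    # numerically stable softmax
    e = np.exp(v - v.max())
    return e / e.sum()

def hopfield_update(X, xi, beta=1.0):
    # One retrieval step of the modern continuous Hopfield network:
    # xi_new = X softmax(beta * X^T xi).
    # X holds the stored patterns as columns; xi is the query/state vector.
    return X @ softmax(beta * (X.T @ xi))

# Toy example: retrieve a stored pattern from a noisy query.
X = np.array([[1., -1.],
              [1.,  1.],
              [-1., 1.]])            # two 3-d patterns as columns
xi = np.array([0.9, 1.1, -0.8])      # noisy version of the first pattern
for _ in range(3):
    xi = hopfield_update(X, xi, beta=4.0)
# xi is now very close to the first stored pattern [1, 1, -1]
```

With a large enough beta the update converges in one or two steps, which is the attention-like behavior that makes these networks interesting for the training-time inference I'm hoping for.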
[1] https://neurophys.biomedicale.parisdescartes.fr/wp-content/u...

[2] https://ml-jku.github.io/hopfield-layers/