Shouldn’t the priors be updated and improved each step to get closer to a good prior, and isn’t that why inaccurate priors may be acceptable? (Do BNNs not iteratively update the next step’s prior with the previous step’s posterior?) I haven’t worked on BNNs, but since Bayesians are always talking about updating their priors I thought this would be the case.
You are describing a recursive Bayesian approach, which can have significant computational and storage advantages for filtering (for example, Kalman filters). For this to work well, the prior must be able to adequately represent what the posterior has learned, which is practical with a self-conjugate prior or a Monte Carlo approximation like the one particle filters use. In practice, for nontrivial machine learning applications, self-conjugate distributions rarely model the problem well, and compressing the posterior into a concise prior with good fidelity is rarely practical.
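As a minimal sketch of the conjugate case described above (not anything BNN-specific): with a Gaussian prior on an unknown mean and known noise variance, the posterior is again Gaussian, so it can serve exactly as the next step's prior. The variable names and streaming setup here are purely illustrative.

```python
import numpy as np

def gaussian_update(prior_mean, prior_var, data, noise_var):
    """Conjugate update for an unknown mean with known noise variance.
    The Gaussian family is closed under this likelihood, so the posterior
    is again Gaussian and can be reused verbatim as the next prior."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

# Recursive use: each step's posterior becomes the next step's prior.
rng = np.random.default_rng(0)
true_mean, noise_var = 3.0, 4.0
mean, var = 0.0, 100.0  # broad ("inaccurate") initial prior
for step in range(5):
    batch = rng.normal(true_mean, np.sqrt(noise_var), size=20)
    mean, var = gaussian_update(mean, var, batch, noise_var)
    print(f"step {step}: posterior mean={mean:.3f}, var={var:.4f}")
```

The catch for BNNs is that the posterior over millions of correlated weights has no such closed form, so there is no concise prior that faithfully carries the posterior forward to the next step.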
Rationality: from AI to Zombies is largely based on Daniel Kahneman’s work, with further influence from Judea Pearl’s work on causality. The rationalist subculture has a writing style and set of trends that can be a bit polarizing, but I found it enjoyable. A good bit of it is about how to quantify uncertainty in real problems, and how that uncertainty exists only in the observer, not in the universe. It also takes on controversial examples like cryonics, race, and politics.
light.co was mentioned on the corresponding reddit thread. I'm starting to worry I'm going to be burned by LaForge's Shima glasses. They are 'super open' with updates showing disassembled hardware components, but despite promises of press reviews and shipments in Q1, I haven't seen a single hands-on demo, press review, or any evidence that even their proof of concept exists.