I agree, this statement seems like the most interesting one in the essay.
It reminds me of expected-value calculations: an extremely low-probability event of tremendously high impact can actually outweigh a moderately probable event of moderate impact in terms of how concerned you should be about it. Assigning meaningful probabilities and impact values is tricky, though, which adds a whole layer of complexity and hand-waving to the problem.
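To make that concrete, here's a toy sketch with invented numbers (nothing from the essay, just the naive probability-times-impact framing):

    # Toy expected-value comparison; all numbers are made up for illustration.
    def expected_value(probability, impact):
        """Naive expected value: probability times impact."""
        return probability * impact

    # Rare but catastrophic: 1-in-100,000 chance, impact of 10 million "harm units".
    rare_catastrophe = expected_value(1e-5, 10_000_000)  # -> 100.0

    # Likely but moderate: 1-in-10 chance, impact of 500 units.
    common_nuisance = expected_value(0.1, 500)           # -> 50.0

    # The rare catastrophe dominates despite being 10,000x less probable.
    print(rare_catastrophe > common_nuisance)  # True

Of course, the entire result hangs on those two inputs, which is exactly where the hand-waving comes in.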
This is where discussions of nuclear war, strong AI, Nazi-oriented political resurgences, geomagnetic storms, epidemics, etc. can really go off the rails, in my opinion: two people can separately evaluate the "worst-case scenario" as very different realities, assigning very different intuited impact values and very different probabilities. Even if both think in expected-value terms, a discussion between them becomes an equation in at least two unknowns, and the odds of even understanding each other are slim, let alone of converging on the same x and y and agreeing on what that implies for how we should act in response.
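For instance (numbers invented): if one person puts the probability at 1 in 1,000 and the impact at a billion harm units, their expected value is a million; if the other puts it at 1 in a million and ten million units, theirs is ten. The two estimates differ by five orders of magnitude before the real argument even begins.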
Graham's point here, on the positive side, is a refreshing step outside that domain. A recycled idea, if broadly applicable enough, needs only a hint of novelty. That gives me some hope of being able to come up with useful things.