Stuff like AI alignment is why I'm less interested in EA groups nowadays, tbh. Too much weight is given to these sci-fi scenarios over well-understood solutions to real, present-day problems. I also find it's mostly philosophers, not CS people, discussing these issues, and I don't believe you can weigh the risk/reward of mid-to-far-future problems based on the views of people who don't understand even the theoretical implementation details. I still fully agree with many other parts of EA, though, like Giving What We Can and focusing on effective charities.