
I didn't say "it's too late to do anything"; I said "it's impossible to do enough".

From your book link, imagine this:

"Dear Indian Government, please ban AI research because 'Governments will take radical actions that make no sense to their own leaders' if you let it continue. I hope you agree this is serious enough for a complete ban."

"Dear Chinese Government, are you scared that 'Corporations, guided by artificial intelligence, will find their own strategies incomprehensible.'? Please ban AI research if so."

"Dear Israeli Government, techno-powerhouse though you are, we suggest that if you do not ban AI research then 'University curricula will turn bizarre and irrelevant.' and you wouldn't want that to happen, would you? I'm sure you will take the appropriate lawmaking actions."

"Dear American Government, We may take up pitchforks and revolt against the machines unless you ban AI research. BTW we are asking China and India to ban AI research so if you don't ban it you could get a huge competitive advantage, but please ignore that as we hope the other countries will also ignore it."

Convincing, isn't it?



Where, specifically, in the book do you see the author advocating this sort of approach?

The problem with "it's impossible to do enough" is that too often it's an excuse for total inaction. And you can't predict in advance what "enough" is going to be. So sometimes, "it's impossible to do enough" will cause people to do nothing, when they actually could've made a difference -- basically, ignorance about the problem can lead to unwarranted pessimism.

In this very subthread, you can see another user arguing that there is nothing at all to worry about. Isn't it possible that the truth is somewhere in between the two of you, and there is something to worry about, but through creativity and persistence, we can make useful progress on it?


I see the book's website opening with those unconvincing scaremongering scenarios and it doesn't make me want to read further. I think there is something to worry about, but I doubt we can make useful progress on it. Maybe the book has suggestions, but I think we cannot solve the collective action problem[1].

The only times humans have solved a collective action problem at world scale are after the damage is very visible: the ozone layer with a continent-sized hole in it and increasing skin cancer, polio crippling or killing children on a huge scale, Hiroshima and Nagasaki demonstrating the power of nuclear weapons. And in each case the remedy is simple and checkable: fund polio vaccines, ban one specific class of chemicals, agree not to build uranium enrichment plants, which could fuel nuclear weapons and which are large and internationally visible. Even problems with visible damage are no guarantee: coal power plants kill people with their emissions, combustion vehicles in cities make people sicker, and increasing extreme weather events haven't made people cooperate on climate change. If actual, visible problems aren't enough, speculative problems such as AI risk are even less so.

Add to that backdrop that AI is fun to work on, easy and cheap to work on, and looks like it will give you a competitive advantage. Add the lack of any clear thing to regulate or any easy way to police it: you can't ban linear algebra, and you won't know if someone in their basement is hacking on a GPT-2 derivative. And again, everyone has a double incentive to carry on their research while pretending they aren't -- Google, Microsoft/OpenAI, Meta VR, Amazon Alexa, Palantir crime prediction, Wave and Tesla and Mercedes self-driving, Honda's Asimo and Boston Dynamics on physicality and movement. They will all set their lawyers arguing that they aren't really working on AGI, just on mathematical models which can make limited predictions in their own areas. Nvidia with its GPUs, and Apple, Intel, and AMD integrating machine learning acceleration into their CPU hardware, will argue that they are primarily helping photo tagging or voice recognition or protecting the children, while they chip away year after year at more powerful mathematical models integrating more feedback on ever-cheaper hardware.
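
That double incentive is the standard prisoner's dilemma. A minimal sketch, with made-up payoff numbers of my own (nothing from the book), shows why each player keeps researching no matter what the other does, even though mutual restraint beats a mutual arms race:

    # Toy payoff table: two states each choose to "ban" or "research" AI.
    # Payoffs are (row player, column player); higher is better; numbers are purely illustrative.
    payoffs = {
        ("ban",      "ban"):      (3, 3),  # mutual restraint
        ("ban",      "research"): (0, 5),  # you ban, the rival races ahead
        ("research", "ban"):      (5, 0),  # you race ahead, the rival bans
        ("research", "research"): (1, 1),  # arms race: worse for both than mutual restraint
    }

    for rival in ("ban", "research"):
        ban_payoff      = payoffs[("ban", rival)][0]
        research_payoff = payoffs[("research", rival)][0]
        best = "research" if research_payoff > ban_payoff else "ban"
        print(f"rival plays {rival}: ban={ban_payoff}, research={research_payoff} -> best response: {best}")
    # "research" is the dominant strategy, which is why the "please ban AI" letters above go nowhere.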

[1] https://en.wikipedia.org/wiki/Collective_action_problem



