Hacker News

I'm not a Luddite; if the software is ready and safer than people, then I'd be okay with it.

There's a history of software not being ready while people pretend it is, and then it kills people (Therac-25).

I'm just skeptical that we'll know when it's actually safe.



Therac-25 wasn’t an “it’s not ready yet!”-type issue. It wasn’t an expected or anticipated failure mode - it only became a (literal) textbook case study after people died, and the industry has learned and improved as a consequence.


The manufacturer ignored repeated failures and evidence of malfunction, insisting it was “impossible” for the machine to fail in that way.

Unexpected failure modes are the issue. The Boeing 737 MAX 8 failure being tied to a single sensor would suggest the industry has not fully learned the lesson.


My understanding is that it was a UX issue - the "malfunctioning" was the system working as directed by the user, but the UX was terrible at informing the user of what they were doing.


It was a lot worse than that: http://sunnyday.mit.edu/papers/therac.pdf

That paper is long, but does a great job of giving the context and going deep on the details.
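For a feel of the failure class the paper describes: a toy check-then-act sketch (all names and structure are hypothetical, nothing here is from the actual machine's code) of how a quick operator edit can leave hardware firing in a stale mode because a completion flag is never cleared:

```python
class TreatmentConsole:
    """Toy model of a check-then-act flaw; names are hypothetical."""

    def __init__(self):
        self.requested_mode = None   # what the operator last typed
        self.hardware_mode = None    # what the machine is actually set to
        self.setup_done = False      # completion flag guarding hardware setup

    def enter_prescription(self, mode):
        self.requested_mode = mode
        # Slow hardware setup runs once, then flips the completion flag.
        self.hardware_mode = self.requested_mode
        self.setup_done = True

    def edit_prescription(self, mode):
        # Bug: a quick edit updates the request but never clears
        # setup_done, so setup is not re-run before firing.
        self.requested_mode = mode

    def fire(self):
        if not self.setup_done:
            raise RuntimeError("setup incomplete")
        return self.hardware_mode  # stale if an edit arrived after setup


console = TreatmentConsole()
console.enter_prescription("xray")
console.edit_prescription("electron")  # operator's fast correction
print(console.fire())  # prints "xray": fires in the old mode
```

The fix is as simple as clearing the flag in `edit_prescription` - which is exactly the kind of "trivial once you see it" bug that goes unnoticed when the vendor insists failure is impossible.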


Thank you!



