
I don't think so.

There's a difference between expert systems and black boxes.

Black boxes are problematic in domains where what matters isn't just your decision today, but the evolution of your decision making process.

Easy examples being finance, medicine, legal, education etc.

In these areas, when you are explicitly weighing competing interests/rights/harms, it's pretty important that you be able to explain your reasoning to another. So they can check it, so they can test it, and so they can apply it if it's good.

Not just because your decision could be wrong, but because the process by which we evolve our decisions is important (think precedent for law, blow-up analysis for finance, etc.).

If we want to push our understanding of a domain forward, black boxes populated with a lot of data aren't super helpful.

They are able to spot complex patterns, yes, many of which can be cleaned up and restructured into simple patterns.

In reality, most of the best uses of ML thus far have been either rapid screening/classification based on simple patterns (think OCR on a check - the character recognizer engine in the machine isn't really teaching us much about language or typography, it's just processing existing patterns), or domains with extremely rigid game mechanics, where the rules never change but you can run a billion simulations (chess, go, video games, etc).



Yes. I think the idea is, once you have a predictive model, to go through the computationally hard process of factorization: identifying inputs that, if removed, don't affect predictability that much. Rinse and repeat until you have an explainable model.

Imho you won't always get an explainable model, because sometimes there may be too many factors that are predictive, but the effort is what's important.
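The rinse-and-repeat loop above is essentially greedy backward elimination. A minimal sketch (my own illustration, not anyone's production method), using plain least squares as the stand-in predictive model and training-set R^2 as the stand-in measure of predictability:

```python
import numpy as np

def fit_score(X, y):
    """Fit ordinary least squares and return R^2.
    (A real ablation loop would score on held-out data.)"""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ w
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def backward_eliminate(X, y, tol=0.01):
    """Greedily drop the feature whose removal hurts predictability
    least, until every remaining removal costs more than tol."""
    keep = list(range(X.shape[1]))
    base = fit_score(X, y)
    while len(keep) > 1:
        # score the model with each remaining feature ablated
        scores = {j: fit_score(X[:, [k for k in keep if k != j]], y)
                  for j in keep}
        j = max(scores, key=scores.get)  # cheapest feature to remove
        if base - scores[j] > tol:
            break  # every remaining feature matters
        keep.remove(j)
        base = scores[j]
    return keep

# Toy data: y depends only on columns 0 and 1; columns 2-4 are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
print(sorted(backward_eliminate(X, y)))
```

On the toy data the noise columns get ablated and only the two genuinely predictive inputs survive, which is the "explainable model" endpoint the comment describes. The caveat above still holds: if many correlated factors each carry real signal, the loop stalls early and the model stays wide.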



