
AI is just a tool; like most other technologies, it can be used for good and bad.

Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?

If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.





You are thinking too small.

The goal of AI is NOT to be a tool. It's to replace human labor completely.

This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.


The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).

I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)

> AI is just a tool, like most other technologies, it can be used for good and bad.

The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

However, like social media before it, I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not, and practically speaking, those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.


Regardless of whether you use AI or social media, your happiness (or lack thereof) is largely under your own control.

>your happiness (or lack thereof) is largely under your own control.

Not really. Or at least, "just be happy" isn't a good response to someone homeless and jobless.


> The same could be said of social media

Yes, absolutely.

Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.


> In fact, most people here have seen examples of it done in a good way.

Like this very website we're on, proving the parent's point in fact.


>Like this very website we're on

I don't know if HN 2025 has been a good example of "in a good way".


Why not?

A crowd of people continually rooting against their best interests isn't exactly what's needed for the solidarity that people claim is a boon from social media. It's not as bad as other websites out there, but I've seen these flags several times on older forums.

It won't be as hard as you think for HN to slip into the very thing it mocks today's Instagram for being.


Or maybe they differ on the opinions of what their own best interests are, which aren't yours by definition.

Uh huh, that's always how it starts. "Well you're in the minority, majority prevails".

Yup, story of my life. I have in fact had a dozen different times where I chose not to jump off the cliff with peers. How little I realized back then how rare that quality is.

But you got your answer, feel free to follow the crowd. I already have migrations ready. Again, not my first time.


>Where are you going to draw the line?

How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).

I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products." But baby steps.

>AI as a technology is almost impossible to stop.

Not really a fan of defeatism speak. Tech isn't as powerful as billionaires want you to believe it is. It can indeed be regulated; we just need to first use our civic channels instead of fighting amongst ourselves.

Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.


So only the wealthy can afford legal, medical, and financial advice in your hypothetical?

What makes you think that in the world where only the wealthy can afford legal, medical, and financial advice from human beings, the same will be automatically affordable from AI?

It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?


If you're going to approach this in such bad faith, then I'll simply say "yes" and move on. People can make bad decisions, but that shouldn't be a profitable business.

If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

I'm not sold yet on whether LLMs are AI; my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, it's extremely important when it comes to risk considerations.


Silent downvotes. Any explanations or counterpoints?

Because no one defines AI the way you seem to do here. LLMs and machine learning are in the field of artificial intelligence, AI.

How is ML a subset of AI?

Machine learning isn't about developing anything intelligent at all; it's about optimizing well-defined problem spaces for algorithms defined by humans. Intelligence is much more self-guided and has almost nothing to do with finding the best approximate solution to a specific problem.


> Machine learning (ML) is a field of study in _artificial intelligence_ concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.

https://en.wikipedia.org/wiki/Machine_learning

You are free to define AI differently but don't be surprised if people don't share your unique definition.


The definition there is correct. ML is a field of study in AI; that does not make it AI. Thermodynamics is a field of study in physics; that does not mean that thermodynamics is physics.

You know what, I'm going to take a walk.


