The range of attitudes in there is interesting. There are a lot of people who take a fairly sensible "this is interactive fiction" kind of attitude, and there are others who bristle at any claim or reminder that these relationships are fictitious. There are even people with human partners who have "married" one or more AIs.
Yes. My experience is that it doesn't require scolding, mocking, or criticizing anyone to get permabanned. Just being up front about the fact that you have concerns about the use case is enough for a permaban, even if you only bring that up in order to demonstrate that such a position does not stem from contempt for LLM-as-companion users. :-\
Do you think they know they're just one context reset away from the LLM not recognizing them at all and treating them like a stranger off the street? For someone mentally ill and somehow emotionally attached to the context, it would be... jarring, to say the least.
Many of them are very aware of how LLMs work; they regularly run into context limits, and there have been threads about thoughtfully pruning context versus letting the LLM compact it, making backups, etc.
Generally yes, they experience that routinely and complain and joke about it. Some of them do also describe such jarring experiences as making them cry for a long time.
If you can be respectful and act like a guest, it's worth reading a little there. You'll see the worrisome aspects in more detail but also a level of savvy that sometimes seems quite strange given the level of attachment. It's definitely interesting.
And it's a pity that this highly prevalent phenomenon (to exaggerate a bit, it is probably the way tech in general will become most influential over the next couple of years) is barely mentioned on HN.
The sub combines:
- a large number of incredibly fragile users
- extremely "protective" mods
- a regular stream of drive-by posts that regulars there see as derogatory or insulting
- a fair amount of internal diversity and disagreement
I think discussion on forums larger than it, like HN or popular subreddits, is likely to drive traffic that ultimately backfires on the members. It's inevitable, and it's already happening, but I'm not sure it needs to increase.
I do think the phenomenon is a matter of legitimate public concern, but I don't know how that can best be addressed. Maybe high-quality, long-form journalism? But probably not just cross-posting the sub to larger fora.
Part of me thinks maybe I erred in bringing this up, but there are discussions worth having about continued access to software that's working for people, regardless of what it is, and about whether this is healthy. I probably lean live-and-let-live on this, but there have been cases of suicide and murder where chatbots were involved, and these people are potentially vulnerable to manipulation by the company.
The percentage I mentioned was an example of how a very small prevalence can add up to a sizable number of people, enough to fill a subreddit, because ChatGPT's user count exceeds the population of all but three countries in the world.
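As a back-of-the-envelope sketch (the user count and prevalence below are assumed figures for illustration, not numbers taken from this thread):

    # Tiny prevalence x huge user base = community-sized group.
    # Both numbers are assumptions chosen for illustration.
    assumed_users = 500_000_000   # hypothetical order-of-magnitude user count
    prevalence = 0.0001           # 0.01% of users
    affected = assumed_users * prevalence
    print(f"{affected:,.0f}")     # -> 50,000, plenty to populate a subreddit

Even at a hundredth of a percent, the result is tens of thousands of people.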
Again, do you have anything behind this "highly prevalent phenomenon" claim?
Any sub based on storytelling or reposting memes, videos, etc. is a karma farm full of lies.
Most subs based on politics or current events are at best biased, at worst complete astroturf.
The only subs that I think still have mostly legit users are municipal subs (which still get targeted by bots when anything political comes up) and hobby subs where people show their work or discuss things.
It's a growing market, although it might be because of shifting goalposts. I had a friend whose son was placed in French immersion (a language he doesn't speak at all). From what I understood, he was getting up and walking around in kindergarten and was labelled as mentally divergent; his teachers apparently suggested to his mother that he see a doctor.
(Strangely, these "mental illnesses" and school problems went away after he switched to an English-language school; must be a miracle.)
I assume the loneliness epidemic is producing similar cases.
> I had a friend whose son was placed in French immersion (a language he doesn't speak at all).
In my entire French immersion kindergarten class, there was exactly one child who already spoke French. I don't think the fact that he didn't speak the language is the concern.
There was (and maybe still is) an interesting period when "normies" were joining Twitter en masse and adopted many of the denizens' ideas as normal, widespread ones. Kinda like going on a camping trip at "the lake" because you heard it's fun and not realizing that everyone else on the trip is part of a semi-deranged cult.
The outsized effect of this was journalists thinking these people on Twitter were accurate representations of what society as a whole was thinking.
It was hackers building crazy stuff (Steve Wozniak's book detailed how he built one of the company's first computers, and he knew what every logic gate in it did), and then some people realized there was money to be made...
Any "founders" out there showing off their vibe-coded SaaS with money from their FAANG career that they got after finishing the bootcamp course? (I mock, as the inner voice asks "You had the talent, why aren't you in the 2 commas club?")
It's why artists despise AI art users. In that field it isn't simply them trying to contribute; it's insisting that you wasted your time learning to create art and that if you're a professional you deserve to starve. All while being completely ignorant of the medium and the process.
Many artists through the ages have learned to work in various mediums: sculpture in different materials, oil painting, watercolors, fresco, or whatever. There are myriad ways to express your visual art using physical materials.
Likewise, a girlfriend of mine was a college-educated artist with great output in all sorts of media and a solid grasp of paints, paper, canvas, and what-have-you.
But she was also an Amiga aficionado, and later worked on the PCs I had, and ultimately the item she wanted most in life was a Wacom tablet. That tablet was a force multiplier for her art, giving her real creative freedom to work in digital mediums and create with an ease that was unheard of for messy oil paintings or whatever on canvas in your garage (we actually lived in a converted garage anyway).
So digital art was her saving grace, but also a significant leveler of playing fields. What would distinguish her original creativity from A.I.-generated stuff later on? Really not much. You could still make an oil or watercolor painting that is obviously handmade. Forgeries of great artists have been perpetrated, but most of us can't explain, say, the Shroud of Turin anyway.
So generative A.I. is competing in these digital mediums, and perhaps 3D printing is competing in the realm of physical objects. It's unfortunate for artists that their choices have narrowed so far that they are practically required to work in digital media exclusively and master those apps, and therefore compete with gen A.I. in the virtual realm. That's just how it's gonna be, until folks go back to sculpting marble and painting soup cans.
FWIW, even in physical media, artists face huge competition from "factory art", i.e. low-paid laborers churning out paintings and drawings for cheap. Quantity, not quality, is the name of the game here - and this is the art that adorns offices and hallways around the world.
It's basically like GenAI, but running on a protein substrate instead of a silicon one.
And even in the digital realm, artists have already spent the last decade-plus competing with the equivalent "factory art". Advertising is built on art, and most of that isn't commissioned; it's rented or bought for cheap from stock art providers, and a lot of the supply there comes from people and organizations that specialize in producing it. The OG slop art, before AI.
EDIT: there's some irony here, in that people like to talk about how GenAI might start putting artists out of work (or might already be doing so). But I haven't seen anyone mention that AI has already put slop creators out of work.
Funny, reading your comment I had the idle thought that I mostly see callousness towards artists coming from people retaliating after being belittled by artists for using AI.
And here's your response to what felt like a pretty good-faith comment that deserved at best an equally earnest answer and at worst no response at all.
Enough people have gotten owned for using these things in court that I think the more likely response is laughing at the ignorance rather than feeling threatened.
1. Get owned in court because you used an LLM that made a poor legal argument.
2. Get owned out of court because you couldn't afford the $100K (minimum) that you have to pay the lawyers' cartel just to be able to make your argument in front of a judge.
I'll take number 1. At least you have a fighting chance. And it's only going to get better: LLMs today are the worst they will ever be, whereas the lawyers' cartel rarely gets better and never cuts its prices.
It's going to cost you around $100K if you're lucky, and it could be a lot more. That's what I mean. There are no exact numbers because it depends on how many hours of lawyering it takes to get through the endless process and procedure (designed by lawyers, of course) before you ever even go to court. You can't know that in advance. And if the other side has more money than you, they know it's to their advantage, so they will try to drag out the process and bleed you dry to gain leverage or even force you to drop the case.
That's assuming you are the one doing the suing and not the one getting sued. And even then, that applies to only very limited types of cases. And even when it does, the contingency fee is typically 33% (and can sometimes eat over 50%) of the damages awarded, so the cost is massive in any case.
There is the option of small claims court, which is massively cheaper, but it has very low damage limits, so it's barely worth the effort.