Brand-name pharmaceuticals are sort of a different thing. Brand names must comply with the naming guidelines of the FDA, the European Medicines Agency, and Health Canada simultaneously. In practice, this makes it tricky to use actual words, so many companies adopt an 'empty vessel' naming approach. The empty vessels are nonsense words that (1) invoke an emotion (Wegovy is a good example), (2) can be trademarked, and (3) can survive brand pressure.
I find it very unlikely that it would be trained on that information or that Anthropic would put that in its context window, so it's very likely that it just made that answer up.
No, it did not make it up. I was curious, so I asked it to imitate a posh British accent imitating a South Brooklyn accent while having a head cold, and it explained that it didn't have fine-grained control over the audio output because it was using a TTS. I asked it how it knew that and it pointed me towards [1] and highlighted the following.
> As of May 29th, 2025, we have added ElevenLabs, which supports text to speech functionality in Claude for Work mobile apps.
Tracked down the original source [2] and looked for additional updates but couldn't find anything.
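(Rough sketch to make the point concrete; nothing below comes from the linked changelog. In a text-then-TTS pipeline the model only emits plain text and a separate service renders the audio, so accent and delivery aren't under the model's control. The endpoint, payload, and function name here are hypothetical, not the real ElevenLabs API.)

    import requests

    def speak(model_reply_text: str, api_key: str) -> bytes:
        # Hypothetical TTS endpoint and payload -- not an actual provider API.
        resp = requests.post(
            "https://tts.example.com/v1/synthesize",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"text": model_reply_text, "voice": "default"},
            timeout=30,
        )
        resp.raise_for_status()
        # The audio (voice, accent, prosody) is chosen entirely by the TTS
        # service; the LLM never sees or shapes it.
        return resp.content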
If the ads are just brought in as a stream of text from the same endpoint that's streaming you the response you're wanting, how can that be blocked in the browser anyway?
Another local LLM extension that reads the output and determines if part of it is too "ad-ey" so it can hide that part?
It will depend on how they implement the sponsored content. If there are regulations that require marking it as sponsored, that makes it easy to block. If not, then sure maybe via LLMs.
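To make the "easy to block if it's marked" case concrete, here's a rough sketch (the <sponsored> tag and all names are made up; a real implementation would differ): strip marked spans out of the streamed text client-side before display.

    import re

    SPONSORED = re.compile(r"<sponsored>.*?</sponsored>", re.DOTALL)

    def filter_stream(chunks):
        """Yield streamed text with complete <sponsored>...</sponsored> blocks removed."""
        buf = ""
        for chunk in chunks:
            buf += chunk
            buf = SPONSORED.sub("", buf)   # drop any finished sponsored blocks
            cut = buf.rfind("<")           # hold back a possibly partial tag
            if cut == -1:
                yield buf
                buf = ""
            else:
                yield buf[:cut]
                buf = buf[cut:]
        yield SPONSORED.sub("", buf)       # flush whatever remains

    # e.g. "".join(filter_stream(["Results: <spon", "sored>Buy X!</sponsored> done."]))
    # -> "Results:  done."

If nothing is marked, you're back to the local-classifier idea above, scoring spans as "ad-ey", which is a lot less reliable.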
These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
Why would they want an LLM to slurp their web site to help some analyst create a report about the cost of widgets? If they value the data they can pay for it. If not, they don't need to slurp it, right? This goes for training data too.
Or as therapists, as we continue to 'do the work' by buying new and increasingly strange forms of self-exploration.
But also, get real if you believe that kill bots won't take all the cannon fodder jobs. Computer programs don't have the nasty habit of disobeying orders or revolting.
Personally I think people should generally be polite and respectful towards the models. Not because they have feelings, but because cruelty degrades those who practice it.
Computers exist to serve. Anthropomorphising them or their software programming is harmful¹. The tone of voice an officer would use to order a private or someone of lower rank to do something seems suitable — which obviously comes down to terse, clear, unambiguous queries and commands.
Besides, humans can switch contexts easily. I don't talk to my wife in the same way I do to a colleague, and I don't talk to a colleague like I would to a stranger, and that too depends on context (is it a friendly, neutral, or hostile encounter?).
1: At this point. I mean, we haven't even reached Kryten-level computer awareness and intelligence yet, let alone Data.
> tone of voice an officer would use to order a private
Most people probably don't have the mental aptitude to be in that sort of position without doing some damage to their own psyche. Generally speaking, power corrupts. Militaries have generally come up with methods of weeding people out but it's still a problem. I think even if it's just people barking orders at machines, it has the potential to become a social problem for at least some people.
As for anthropomorphising being bad, it's too late. That ship sailed for sure as soon as we started conversing with machines in human languages. Humans already have an innate tendency to anthropomorphize, even inanimate objects like funny-shaped boulders that kind of look like a person if you squint at them from an angle. And have you seen how people treat dogs? Dogs don't even talk.
> Anthropomorphising them or their software programming is harmful¹.
LLMs are trained on internet data produced by humans.
Humans tend to appreciate politeness and go to greater lengths answering polite questions, hence the LLMs will also mimic that behavior because that's what they're trained on.
I agree! I try to remember to prompt as if I were writing to a colleague because I fear that if I get in the habit of treating them like a servant, it will degrade my tone in communicating with other humans over time.
Agreed. I caught some shit from some friends of mine when I got mildly annoyed that they were saying offensive things to my smart speakers, and yeah on the one hand it's silly, but at the same time... I dunno man, I don't like how quickly you turned into a real creepy bastard to a feminine voice when you felt you had social permission to. That's real weird.
Yes. I tend to be polite to LLMs. I admit that part of the reason is that I'm not 100% sure they're not conscious, or a future version could become so. But the main reason is what you say. Being polite in a conversation just feels like the right thing to me.
It's the same reason why I tend to be good to RPG NPCs, except if I'm purposefully role playing an evil character. But then it's not me doing the conversation, it's the character. When I'm identifying with the character, I'll always pick the polite option and feel bad if I mistreat an NPC, even if there's obviously no consciousness involved.
I think it depends on the self-awareness of the user. It's easy to slip into the mode of conflating an LLM with a conscious being, but with enough metacognition one can keep them separate. Then, in the same way that walking on concrete doesn't make me more willing to walk on a living creature, neither does my way of speaking to an LLM bleed into human interactions.
That said, I often still enjoy practicing kindness with LLMs, especially when I get frustrated with them.
It seems like by default, the LLMs I've used tend to come across as eager to ask follow-up questions along the lines of "what do you think, x or y?" or "how else can I help you with this?" I'm going to have to start including instructions not to do that to avoid getting into a ghosting habit that might affect my behavior with real people.
I think people in general are not all that great at doing this.
Anecdotal, but I grew up in a small town in rural New England, a few hours from NYC and popular with weekenders and second-home owners from there. I don’t think that people from NYC are inherently rude, but there’s a turbulence to life in NYC to where jockeying for position is somewhat of a necessity. It was, however, transparently obvious in my hometown that people from the city were unable to turn it off when they arrived. Ostensibly they had some interest in the slow-paced, pastoral village life, but they were readily identifiable as the only people being outwardly pushy and aggressive in daily interactions. I’ve lived in NYC for some time now, and I recognize the other side of this, and feel it stemmed less from inherent traits and more from an inability to context switch behavior.
Saying thank you to a plant for growing you a fruit is strange behavior. Saying thank you to an LLM for growing you foobar is also strange behavior. Not doing either does not degrade the grower.
Disagree wrt practicing gratitude towards resources consumed and tools utilized. Maybe it doesn't degrade you if you don't, but I think it gives you a bit more perspective.
I think we agree on this if you agree that practicing gratitude in life and directly practicing it on non-sentient objects are not the same thing. Going to church to pray, going to therapy, practicing mindfulness, etc. isn't the same thing as seeing each grape growing on a vine as an anthropomorphic object. Don't anthropomorphize your lawnmower.
Saying thank you to an LLM is indeed useless, but asking politely could appeal to the training data and produce better results, because people who asked politely on the internet got better answers and that behavior could be baked into the models.
Where do you draw the line though? I know some people who ask Google proper questions like "how do I match open tags except XHTML self-contained tags using RegEx?" whereas I just go "html regex". Some people may even add "please" and "thank you" to that.
I doubt anyone is polite in a terminal, also because it's a syntax error. So the question is also, do you consider it a conversation, or a terminal?
> You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords.
Ahh, the full spectrum of human motivation -- niceness for its own sake, fear, and (let me add mine) Machiavellianism: I think being polite in your query produces better results.
The study is not about cruelty, but rather politeness. Impoliteness is not anything like cruelty.
Meanwhile, there is no such thing as cruelty toward a machine. That’s a meaningless concept. When I throw a rock at a boulder to break it, am I being cruel to that rock? When I throw away an old calculator, is that cruelty? What nonsense.
I do think it is at the very least insulting, and probably cruel and abusive, to build machines that assume an unearned, unauthorized standing in the social order. There is no moral basis for that. It's essentially theft of a solely human privilege, one that can only legitimately be asserted by a human on his own behalf or on behalf of another human.
You don't get to insist that I show deference and tenderness toward some collection of symbols that you put in a particular order.
When coding with LLMs, they always make these dumb fucking mistakes, or they don't listen to your instructions, or they start changing things you didn't ask them to... it's very easy to slip and become gradually more rude until the conversation completely derails. I find that forcing myself to be polite helps me keep my sanity and keep the conversation productive.