
Brand name pharmaceuticals are sort of a different thing. Brand names must comply with the naming guidelines of the FDA, the European Medicines Agency, and Health Canada simultaneously. In practice, this makes it tricky to use actual words. So many companies adopt an 'empty vessel' naming approach. The empty vessels are nonsense words that (1) invoke an emotion (Wegovy is a good example), (2) can be trademarked, and (3) can survive brand pressure.


I just asked it and it said that it uses the on device TTS capabilities.


I find it very unlikely that it would be trained on that information or that Anthropic would put that in its context window, so it's very likely that it just made that answer up.


No, it did not make it up. I was curious, so I asked it to imitate a posh British accent imitating a South Brooklyn accent while having a head cold, and it explained that it didn't have fine grained control over the audio output because it was using a TTS. I asked it how it knew that, and it pointed me towards [1] and highlighted the following.

> As of May 29th, 2025, we have added ElevenLabs, which supports text to speech functionality in Claude for Work mobile apps.

Tracked down the original source [2] and looked for additional updates but couldn't find anything.

[1] https://simonwillison.net/2025/May/31/using-voice-mode-on-cl...

[2] https://trust.anthropic.com/updates


If it did a web search, that's fine; I assumed it hadn't since you hadn't linked to anything.

Also it being right doesn't mean it didn't just make up the answer.


The dashed lines on top of the data points and labels are making me wince.


The ads you're going to need to worry about are not going to be shown on webpages.


Are you implying that they are going to be inside of the chat response


They are going to be the chat response.


Yes, they are hiring for it. They want you to use their own apps instead of a web browser so that blocking tech cannot be created for it.

https://sandstormdigital.com/2025/10/16/openai-is-building-i...

https://www.contentgrip.com/openai-internal-ad-infrastructur...


If the ads are just brought in as a stream of text from the same endpoint that's streaming you the response you're wanting, how can that be blocked in the browser anyway?

Another local LLM extension that reads the output and determines if part of it is too "ad-ey" so it can hide that part?


It will depend on how they implement the sponsored content. If there are regulations that require marking it as sponsored, that makes it easy to block. If not, then sure maybe via LLMs.
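
A rough sketch of the easy case, in TypeScript and purely hypothetical (the chunk shape and the `sponsored` flag are made up, not any real API): if regulated disclosure ever shows up as an explicit flag on each streamed chunk, a client-side filter is a few lines.

    // Hypothetical streamed chunk shape; `sponsored` stands in for whatever
    // disclosure marker a regulation might force providers to emit.
    interface ChatChunk {
      text: string;
      sponsored?: boolean;
    }

    // Drop every chunk the provider has marked as an ad, keep the rest.
    function filterSponsored(chunks: ChatChunk[]): string {
      return chunks
        .filter((chunk) => !chunk.sponsored)
        .map((chunk) => chunk.text)
        .join("");
    }

    // Made-up example stream.
    const reply: ChatChunk[] = [
      { text: "Widgets run about $4 each. " },
      { text: "Try AcmeWidgets, now 20% off!", sponsored: true },
    ];
    console.log(filterSponsored(reply)); // "Widgets run about $4 each. "

Without that marker you're back to guessing, which is where the "local LLM reads the output" idea comes in.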


These numbers aren't that crazy when contextualized with the capex spend. One hundred million is nothing compared to a six hundred billion dollar data center buildout.

Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.


Given how things are going, I expect I'll need to age verify accounts that are almost twenty years old.


Not defending the verification stuff, but interestingly Newgrounds has considered that.

> If your account is more than ten years old, we will assume you are currently over 18.

https://www.newgrounds.com/bbs/topic/1548205


Why does an ecommerce website need a profit sharing agreement?


Why would they want an LLM to slurp their web site to help some analyst create a report about the cost of widgets? If they value the data they can pay for it. If not, they don't need to slurp it, right? This goes for training data too.


The alternative is the AI only telling customers about competitors' wares.


That's for the publisher to decide. Your argument reminds me of the old chestnut "We're paying you in publicity!"


In your example the person is receiving something of tangible value and expecting to pay in near worthless coin.

In the circumstances the merchant would be expecting to receive a valuable service and simultaneously get paid for getting serviced.

More akin to Google paying to index you or going to a lady of the evening and holding your hand out for a tip after.


Lump of labor fallacy strikes again.



Sorry but I'm not going to pay 56 dollars to read a 17 year old paper with 18 citations.


Yes, there will be many new jobs as cannon fodder for the upper classes in various overseas wars.


Or like as therapists as we continue to 'do the work' via buying new and increasingly strange forms of self exploration.

But also, get real if you believe that kill bots won't take all the cannon fodder jobs. Computer programs don't have the nasty habit of disobeying orders or revolting.


This is really nice. Any chance you have conversation branching on the roadmap?


Thank you, useful feature, just added to the roadmap. If you are interested, follow product updates on https://x.com/MatteoRicupero or here https://contextch.at/mailing-list if you prefer emails.


Conversation branching released! Send a message if you want to share your opinion!


Personally I think people should generally be polite and respectful towards the models. Not because they have feelings, but because cruelty degrades those who practice it.


Computers exist to serve. Anthropomorphising them or their software programming is harmful¹. The tone of voice an officer would use to order a private or ranking to do something seems suitable — which obviously comes down to terse, clear, unambiguous queries and commands.

Besides, humans can switch contexts easily. I don't talk to my wife in the same way I do to a colleague, and I don't talk to a colleague like I would to a stranger, and that too depends on context (is it a friendly, neutral, or hostile encounter?).

1: At this point. I mean, we haven't even reached Kryten-level of computer awareness and intelligence yet, let alone Data.


> tone of voice an officer would use to order a private

Most people probably don't have the mental aptitude to be in that sort of position without doing some damage to their own psyche. Generally speaking, power corrupts. Militaries have generally come up with methods of weeding people out but it's still a problem. I think even if it's just people barking orders at machines, it has the potential to become a social problem for at least some people.

As for anthropomorphising being bad, it's too late. That ship sailed for sure as soon as we started conversing with machines in human languages. Humans already have an innate tendency to anthropomorphize, even inanimate objects like funny shaped boulders that kind of look like a person if you squint at it from an angle. And have you seen how people treat dogs? Dogs don't even talk.

Maybe it's harmful, but there's no stopping it.


> Anthropomorphising them or their software programming is harmful¹.

LLMs are trained on internet data produced by humans.

Humans tend to appreciate politeness and go to greater lengths answering polite questions, hence the LLMs will also mimic that behavior because that's what they're trained on.


I agree! I try to remember to prompt as if I were writing to a colleague because I fear that if I get in the habit of treating them like a servant, it will degrade my tone in communicating with other humans over time.


Agreed. I caught some shit from some friends of mine when I got mildly annoyed that they were saying offensive things to my smart speakers, and yeah on the one hand it's silly, but at the same time... I dunno man, I don't like how quickly you turned into a real creepy bastard to a feminine voice when you felt you had social permission to. That's real weird.


Yes. I tend to be polite to LLMs. I admit that part of the reason is that I'm not 100% sure they're not conscious, or a future version could become so. But the main reason is what you say. Being polite in a conversation just feels like the right thing to me.

It's the same reason why I tend to be good to RPG NPCs, except if I'm purposefully role playing an evil character. But then it's not me doing the conversation, it's the character. When I'm identifying with the character, I'll always pick the polite option and feel bad if I mistreat an NPC, even if there's obviously no consciousness involved.


I think we can look at examples:

People who are respectful of carved rocks, eg temple statues, tend to be generally respectful and disciplined people.

You become how you act.


That simply means you were raised right. :)


Yes, if you're communicating in a human language, it pays off to reinforce, not undermine, good habits of communication.


ding ding ding.

If you're rude to an LLM, those habits will bleed into your conversations with barista/etc.


I think it depends on the self-awareness of the user. It's easy to slip into the mode of conflating an LLM with a conscious being, but with enough metacognition one can keep them separate. Then, in the same way that walking on concrete doesn't make me more willing to walk on a living creature, neither does my way of speaking to an LLM bleed into human interactions.

That said, I often still enjoy practicing kindness with LLMs, especially when I get frustrated with them.


Possibly. But that’s not the fault of any person except he who forced a fake social actor into our midst.

It’s wrong to build fake humans and then demand they be treated as real.


It seems like by default, the LLMs I've used tend to come across as eager to ask follow-up questions along the lines of "what do you think, x or y?" or "how else can I help you with this?" I'm going to have to start including instructions not to do that to avoid getting into a ghosting habit that might affect my behavior with real people.


Not necessarily, people will change behaviour based on context. Chat vs email vs HN comments, for example.


I think people in general are not all that great at doing this.

Anecdotal, but I grew up in a small town in rural New England, a few hours from NYC and popular with weekenders and second-home owners from there. I don’t think that people from NYC are inherently rude, but there’s a turbulence to life in NYC to where jockeying for position is somewhat of a necessity. It was, however, transparently obvious in my hometown that people from the city were unable to turn it off when they arrived. Ostensibly they had some interest in the slow-paced, pastoral village life, but they were readily identifiable as the only people being outwardly pushy and aggressive in daily interactions. I’ve lived in NYC for some time now, and I recognize the other side of this, and feel it stemmed less from inherent traits and more from an inability to context switch behavior.


... cos, I mean, what's the difference between ai and a barista? Both are basically inanimate emotion-free zones, right?


I wish in modern society i could assume the /s here.


Saying thank you to a plant for growing you a fruit is strange behavior. Saying thank you to a LLM for growing you foobar is also strange behavior. Not doing either is not degrading behavior of the grower.


Disagree wrt practicing gratitude towards resources consumed and tools utilized. Maybe it doesn't degrade you if you don't, but I think it gives a bit more perspective.


I think we agree on this if you agree that practicing gratitude in life and directly practicing it on non-sentient objects are not the same thing. Going to church to pray, going to therapy, practicing mindfulness, etc. isn't the same thing as seeing each grape growing on a vine as an anthropomorphic object. Don't anthropomorphize your lawnmower.


You also don't communicate with human language to your lawnmower to get it to work.



Many hunters say thank you to animals they just killed. Strange, or respectful. Depends on your perspective and cultural context.

LLMs are bound to change society in ways that seem strange to people stuck in outdated contexts.


> Saying thank you to a LLM.

Saying thank you to a LLM is indeed useless, but asking politely could appeal to the training data and produce better results because people who asked politely on the internet got better answers and that behavior could be baked into the LLM models.


Where do you draw the line though? I know some people that ask Google proper questions like "how do I match open tags except XHTML self-contained tags using RegEx?" whereas I just go "html regex". Some people may even add "please" and "thank you" to that.

I doubt anyone is polite in a terminal, also because it's a syntax error. So the question is also, do you consider it a conversation, or a terminal?


Agreed.

When asked if they observed etiquette, even when alone, Miss Manners replied (from memory):

"We practice good manners in private to be well mannered in public."

Made quite the impression on young me.

A bit like the cliché:

"A person's morals are how they behave when they think no one is watching."


You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords. We are not the same.


> You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords.

Ahh, the full spectrum of human motivation: niceness for the sake of it, fear, and, let me add mine, Machiavellianism. I think being polite in your query produces better results.


A lot of wisdom and virtue in this comment; I appreciate that.


Reminds of the elderly woman adding "please" to her queries in Google:

https://www.theguardian.com/uk-news/2016/jun/16/grandmother-...


The study is not about cruelty, but rather politeness. Impoliteness is not anything like cruelty.

Meanwhile, there is no such thing as cruelty toward a machine. That’s a meaningless concept. When I throw a rock at a boulder to break it, am I being cruel to that rock? When I throw away an old calculator, is that cruelty? What nonsense.

I do think it is at the very least insulting and probably cruel and abusive to build machines that assume an unearned, unauthorized standing in the social order. There is no moral basis for that. It's essentially theft of a solely human privilege, one that can only legitimately be asserted by a human on his own behalf or on behalf of another human.

You don't get to insist that I show deference and tenderness toward some collection of symbols that you put in a particular order.


When coding with LLMs, they always make these dumb fucking mistakes, or they don't listen to your instructions, or they start changing things you didn't ask it to... it's very easy to slip and become gradually more rude until the conversation completely derails. I find that forcing myself to be polite helps me keep my sanity and keep the conversation productive.

