Hacker News | hanspeter's comments

Calling this a "Protip" is generous.

That the combined element has any surface area that doesn't toggle the radio setting is a straight-up bug.

It is laughable for a component this heavily refined to have such a basic usability flaw.


I'm thinking protip was sarcasm :)

Their point may be about viewing distance.

If the edges of the screen are further from your eyes than the center, the content and text don't appear at the same size. If you wear glasses, the edges might even fall out of focus unless you physically move closer.


This is an interesting claim.

How many is plenty and what are the sources to back this?


It's basically a failure of setting up the proper response playbook.

Instead of:

1. AI detects gun on surveillance

2. Dispatch armed police to location

It should be:

1. AI detects gun on surveillance

2. Human reviews the pictures and verifies the threat

3. Dispatch armed police to location
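The playbook above can be sketched as a simple human-in-the-loop pipeline. This is purely illustrative; the names (`Detection`, `handle_detection`, `human_review`) are hypothetical and don't correspond to any real surveillance system's API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Detection:
    """A hypothetical AI detection event from a surveillance feed."""
    camera_id: str
    image_url: str
    label: str        # e.g. "gun"
    confidence: float


def handle_detection(det: Detection,
                     human_review: Callable[[Detection], bool]) -> str:
    """Route an AI detection through a human check before any dispatch."""
    if det.label != "gun":
        return "ignored"
    # Step 2: a human reviews the frames before anyone is dispatched.
    if human_review(det):
        return "dispatch"   # Step 3: send armed police to the location
    return "dismissed"      # false positive, e.g. a bag of Doritos


# Usage: a reviewer policy that rejects low-confidence hits.
result = handle_detection(
    Detection("cam-7", "http://example.test/frame.jpg", "gun", 0.62),
    human_review=lambda d: d.confidence > 0.9,
)
print(result)  # -> dismissed
```

The point of the sketch is just that the dispatch step is unreachable without an affirmative human decision in between.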

I think the latter version is likely what already took place in this incident, and it was actually a human that also mistook a bag of Doritos for a gun. But that version of the story is not as interesting, I guess.


What's the point of saving money if it's a risk to reputation?


Will Smith punched a dude, a comedian, on stage. I think you are putting a lot of weight on a vague concept, with a scatter plot of outliers.

You actually need a reputation of merit for there to be risk. He's a rapper, not a saint or an ethicist.


Is there more to it, or are we calling the situation out of control based on a single anecdote from Reddit?


You need to read more of the source blog - he's been pretty pro LLM, but is now acknowledging where it's going too far.


I'm not new to simonw.

It doesn't change the fact that this is just a quote from a Reddit post and a link to it.


With pay-per-minute car sharing having existed in many European cities for 10+ years, this concept is not new.

People will adapt to the level of cleanliness of the car they get into, so it's a slippery slope. Users will behave respectfully in the early days (maybe because they are first movers), and then it deteriorates over the long term.

My own experience is that people used to not leave even an empty soda bottle in the cars, and now I see take-out remains on the floor, coffee cups, chewing gum stuck around the dashboard, etc. You can report this to the car service, but they won't be able to take any meaningful action on it.


I don't think this is a dark-pattern problem in the sense that I don't think it is _intentionally_ deceiving.

I think Meta fully expected this feature to be used by people who are excited about their conversation with the AI and want to share it publicly. Just like we see with OpenAI Sora.

There's not much to win for Meta if users instead are unknowingly sharing deeply personal conversations.


> I think Meta fully expected this feature to be used by people who are excited about their conversation with the AI and want to share it publicly.

That's really what you think? And what they think? That people are so enamored - in droves - with their exchange with a chatbot that they're trying to share it for the world to see?

Maybe I'm the old fogey who doesn't get it, but it's just hard for me to believe that this is something many people want, or something that smart people think others earnestly want. Again, I may be the outlier here, but this just sounds crazy to me.


People share AI chats all the time on Twitter, Reddit, etc.

I don't personally think the feature makes a lot of sense in Meta AI.

However, it's a lot more likely that their product team genuinely thought it might than that they intentionally wanted to give users a bad experience and risk more bad press (again, Meta would gain nothing from people sharing by mistake).


Considering that 90% of the chats I see shared are people tripping over themselves to demonstrate the AI being silly and dumb, yeah, they are enamored enough to share with the public :p


I agree. Further, these companies show us over and over again who they are. Whether it's tobacco, pharma, food, or oil companies, they always know, in exactly the way and at the time that makes you sick to your stomach, what they're doing and who's likely to fall for it. The comments in this topic are feeling a bit sophistic.


If you aren't using AI your peers and competitors are. It is highly effective at getting you through tough problems quickly.

It has problems for sure, but if you aren't "enamored" with AI then I don't think you've actually tried to use it.


You completely misunderstood me. I am not incredulous that people use AI, nor am I in any way doubting how it can aid all sorts of processes.

I am incredulous that a primary use case of a genAI chatbot is sharing your chat conversation publicly. It's easy to see why people would do this for genAI images, videos, or even code; I even understand some occasional sharing of a chat exchange from time to time. But routine, regular interest, from regular people, of just sharing their text chat? I do not understand that at all.


On that we can definitely agree.


> I think Meta fully expected this feature to be used by people who are excited about their conversation with the AI and want to share it publicly. Just like we see with OpenAI Sora.

Meta's expectations != the expectations of a reasonable human who has used other "share" buttons before.


Share buttons offer no inherent privacy settings.

Sharing to a text message is private. In contrast, sharing to social media platforms such as Twitter, Reddit, Pinterest, and LinkedIn makes the content public. The destination determines the audience.


The TYPICAL behaviour when you hit "share" on any platform is not to immediately share. It TYPICALLY gives you options to share to a variety of other sources, both public and private. It also generates a link if you want to grab the link and specifically share that.

That is the TYPICAL share behaviour. If what Meta is doing with their new app is obscuring this typical behaviour, and a "share" click goes directly to the public, that would violate the de facto behaviour users are accustomed to when using the share button.


The initial "Share" click doesn't post anything publicly.

It just opens a modal so you can choose to post. You have to make a second click to confirm.
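The two-step flow described above can be modeled as a tiny state machine. This is a minimal sketch under the assumption stated in the comment (first click opens a modal, second click posts); the class and method names are illustrative, not Meta's actual implementation.

```python
class ShareFlow:
    """Toy model of a two-step share: modal first, post only on confirm."""

    def __init__(self) -> None:
        self.modal_open = False
        self.posted = False

    def click_share(self) -> None:
        # First click: open the modal. Nothing is posted yet.
        self.modal_open = True

    def confirm(self) -> None:
        # Second click: only now does the content go public,
        # and only if the modal was actually opened first.
        if self.modal_open:
            self.posted = True


flow = ShareFlow()
flow.click_share()
print(flow.posted)   # -> False (one click is not enough)
flow.confirm()
print(flow.posted)   # -> True
```

The invariant the design relies on is that `posted` can never become true from a single click.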


Typical behavior on a Meta/social app is that it shares it to everyone. See Instagram, Facebook, TikTok, Twitter, etc.

If you index on chat apps, you're correct; if you start from Meta's social apps, which they said they did, you are incorrect.


> > Provide full transparency about how many users have unknowingly shared private information.

> Meta shouldn't have to do this

And couldn't either. How would they know if users shared unknowingly?


Users are playing around with AIs for entertainment all the time. You wouldn't be able to determine if seemingly private information was real or made up.


It's not only hypocritical, it's nonsensical in this discussion.

It's obvious that if advertising were made illegal, we would need to pay for all the services we want to use. YouTube Premium is the best example of how that would actually work.

