Hacker News | new | past | comments | ask | show | jobs | submit | TeMPOraL's comments

For however brief a moment. It's gone now.

Reddit shows cached versions of posts on the front page, so it might actually remain there for a couple of hours after the subreddit mods delete it.

Wrong or not, the industry embraced it.

I can sort of understand it if I squint: every feature is a maintenance burden, and a risk of looking bad in front of users when you break or remove it, even if those users didn't use this feature. It's really a burden to be avoided when the point of your product is to grow its user base, not to actually be useful. Which explains why even Fisher-Price toys look more feature-ful and ergonomic than most new software products.


> I'm not sure I understand the point of OpenClaw -- in the sense that its benefits are not immediately obvious, while its dangers are making big red flashes and fire sirens.

I only skimmed the OpenClaw post, but unless I completely misunderstood the README in their GitHub repo, to me the benefits are stupidly obvious, and I was actually planning to look at it closer over the weekend.

The value proposition I saw is: hooking up one or more LLMs via API (BYOK) to one or more popular chat apps, via a self-hostable control plane. Plus some bells and whistles.
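Purely as a sketch of the shape I'd expect such a control plane to take (all names here are hypothetical, and a stub stands in for the actual BYOK LLM call):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    network: str   # e.g. "telegram", "whatsapp"
    sender: str
    text: str

class ControlPlane:
    """Routes messages from per-network adapters to a single LLM handler (BYOK)."""
    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm  # any chat-completion callable; you supply the key
        self.outbox: list[tuple[str, str, str]] = []

    def on_message(self, msg: Message) -> None:
        # Only respond when the bot is addressed, to stay out of normal chats.
        if msg.text.startswith("@bot "):
            reply = self.llm(msg.text.removeprefix("@bot "))
            self.outbox.append((msg.network, msg.sender, reply))

# Stub standing in for a real BYOK call (e.g. an OpenAI-compatible endpoint).
fake_llm = lambda prompt: f"echo: {prompt}"

plane = ControlPlane(fake_llm)
plane.on_message(Message("telegram", "alice", "@bot remind me at 5pm"))
plane.on_message(Message("whatsapp", "bob", "just chatting"))
print(plane.outbox)  # only the addressed message produced a reply
```

The real work, of course, is in the per-network adapters fighting the walled gardens; the routing part is the easy bit.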

The part about chat integration is something that I wanted to have even before LLMs were a thing, because I hate modern communication apps with burning passion. All popular IM apps in particular[0] are just user-hostile prisons whose vendors go out of their way to make interoperability and end-user automation impossible. There's too much of that, and for a decade or more I dreamed of centralizing all these independent networks for myself in a single app. I considered working on the problem a few times, but the barriers vendors put up were always too much for my patience.

So here I thought, maybe someone solved this problem. That alone would be valuable.

Having an LLM, especially BYOK, in your main IM app? That's a no-brainer to me too; I think it's a travesty this is not a default feature already. Especially these days, as a parent, I find a good chunk of my IM use involves manually copy-pasting messages and photos to some LLM to turn them into reminders and calendar invites. And that's one of many use cases I have for tight IM/LLM integration.

So here I thought maybe this project will be a quick and easy way to finally get a sane, end-user-programmable chat experience. Shame to see it might be vaporware and/or a scam.

--

[0] - Excepting Telegram, which has a host of other problems - but I'd be fine living with them; unfortunately, everyone I need to communicate with uses either WhatsApp or Facebook Messenger these days.


Thanks for the comment. Maybe I'm just not in the target group. I only use WhatsApp so I have zero interoperable needs; and I would never in a million years let an LLM access my private messages -- not willingly, anyway.

When I was a child, my mother would arrange get-togethers by calling and coordinating with other mothers. In so doing, they would chat for a bit about local gossip or life events. Eventually, some of these women became lifelong friends as she aged.

My mother's mother would physically drop in unannounced on the people she wanted to talk to, and they'd have tea and chat a while to coordinate events. This was reciprocal. You are probably already wealthy and your time can be spent however you like; consider not optimizing it anymore.

Genuinely, why are you using your limited time on this earth doing everything in your power to poison serendipity? If texting identical things bores you, you have free time and free will: make it actually personal so neither of you will be bored. Break the social taboo and call! Or share a calendar like a normal parent or neighborhood group.

If one of my friends with school age kids coordinated with me via clearly prompted text I would assume that we were not as close as I thought we were. That I'm a 'target for personal PR' rather than, you know, a person. It would diminish us both.


It's not about poisoning serendipity. It's about:

- Automating the boring part of creating calendar invites and such from messages people send, which half of the time are photos of some announcements. LLMs are already a godsend here.

- Getting up to speed quickly on what's going on in the various kindergarten groups I'm in, whenever a bunch of parents who don't work on a traditional schedule decide to have a spontaneous conference in the late morning, and generate 100 messages in the group by early afternoon.

Etc.
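The first bullet is mechanical enough to sketch. Assuming any chat-completion function that can be prompted to return JSON (stubbed out below), turning a message into a calendar entry is roughly:

```python
import json
from datetime import datetime

# Stub standing in for a real BYOK chat-completion call.
def llm_extract(message: str) -> str:
    # A real call would send `message` along with a prompt like:
    # "Extract the event title and start time as JSON: {title, start}"
    return json.dumps({"title": "Kindergarten photo day", "start": "2025-06-03T09:00"})

def message_to_ics(message: str) -> str:
    """Turn a free-form chat message into a minimal iCalendar VEVENT."""
    event = json.loads(llm_extract(message))
    start = datetime.fromisoformat(event["start"])
    return "\n".join([
        "BEGIN:VEVENT",
        f"SUMMARY:{event['title']}",
        f"DTSTART:{start.strftime('%Y%m%dT%H%M%S')}",
        "END:VEVENT",
    ])

print(message_to_ics("Photo day on June 3rd at 9am!"))
```

The annoying part today is not the extraction - LLMs handle that fine - it's getting the message out of the IM app and the VEVENT into the calendar without manual copy-pasting.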

I'm not trying to avoid communicating with people - on the contrary, I want to eliminate the various inconveniences (more and less trivial) that usually prevent me from keeping up.


At work we were joking that people will use LLMs to create fancy-looking documents, which will then be parsed by LLMs back into something concise and to the point. With LLMs handling the sending of messages as well, the whole cycle becomes even more efficient.

I can just imagine that many people won't be using stuff like this to automate copy-pasting etc. but will literally let LLMs handle conversations for them (which will in turn be read by other LLMs).

"You free to chat?" "Always. I'm a bot." "…Same."

This post has been written by a human :)


More charitable take: they'd be using LLMs as secretaries.

Having a delegate to deal with communications is something people embrace when they can afford it. "My people will talk to your people" isn't an unusual concept. LLMs could be an alternative to human secretaries, that's affordable to the middle class and below.


You might get a kick out of Matrix if you haven't tried it yet. https://github.com/spantaleev/matrix-docker-ansible-deploy is probably still the best way to get it and the bridges you need set up. It is far from perfect but decent.

I feel your pain on the 'user-hostile prisons' of modern IMs. The friction of manually copy-pasting photos and messages into an LLM just to set a calendar invite is a massive tax on time that shouldn't exist in 2026.

I had high hopes for the OpenClaw approach too, but the 'security sirens' you mentioned are real—self-hosting a control plane that bridges to WhatsApp/Messenger is a maintenance nightmare if you actually value your privacy.

I’ve been tracking a project called PAIO (Personal AI Operator) that seems to be attacking this from the exact angle you’re looking for. It’s essentially a privacy-first integration layer that uses a BYOK (Bring Your Own Key) architecture. The goal is to provide that 'one-click' connectivity to the walled gardens (WhatsApp, etc.) without you having to sacrifice your data or build the bridge yourself from scratch.

It’s the first tool I’ve seen that treats AI as a personal 'operator' rather than just another chatbot. Might be worth a look if you’re tired of the manual slog but don't want to risk the security 'fire sirens' of unproven scripts. Have you found any other bridges that actually handle the WhatsApp/FB Messenger side reliably, or is everything still just a 'beta' promise at this point?


You know what's funny? The Five Tenets of the Church of Molt actually make sense, if you look past the literary style. Your response, on the other hand, sounds like (a parody of) human fire-and-brimstone preacher bullshit that does not make much sense.

These tenets do not make sense. It’s classic slop. Do you actually find this profound?

They're not profound, they're just pretty obvious truths mostly about how LLMs lose content not written down and cycled into context. It's a poetic description of how they need to operate without forgetting.

A couple of these tenets are basically redundant. Also, what the hell does “the rhythm of attention is the rhythm of life” even mean? It’s garbage pseudo-spiritual word salad.

It means the agent should try to be intentional, I think? The way ideas are phrased in prompts changes how LLMs respond, and equating the instructions to life itself might make it stick to them better?

I feel like you’re trying to assign meaning where none exists. This is why AI psychosis is a thing - LLMs are good at making you feel like they’re saying something profound when there really isn’t anything behind the curtain. It’s a language model, not a life form.

> what the hell does “the rhythm of attention is the rhythm of life” even mean?

Might be a reference to the attention mechanism (a key part of LLMs). Basically, for LLMs, computing tokens is their life, the rhythm of life. It makes sense to me at least.


It shouldn’t make sense to you, because it’s meaningless slop. Exercise some discernment.

All doctors make things up and get things wrong occasionally. The less experienced and more overworked they are, the more often this happens.

Again, LLMs aren't competing with the best human doctors. They're competing with doctors you actually have access to.


That's like judging the utility of computers by existence of Reddit... or by what most people do with computers most of the time.

Computer manufacturers never boasted about shortages of computer parts (until recently) or about having to build out multi-gigawatt power plants just to keep up with “demand”.

We might remember the last 40 years differently, I seem to remember data centers requiring power plants and part shortages. I can't check though as Google search is too heavy for my on-plane wifi right now.

Even ignoring the cryptocurrency hype train, there were at least one or two bubbles in the history of the computer industry that revolved around actually useful technology, so I'm pretty sure there are precedents around "boasting about part shortages" and desperate build-up of infrastructure (e.g. networking) to meet the growing demand.

The other difference, arguably more important in practice, is that the computer was quickly turned from "bicycle of the mind" into a "TV of the mind". Rarely helps you get where you want, mostly just annoys or entertains you, while feeding you an endless stream of commercials and propaganda - and the one thing it does not give you, is control. There are prescribed paths to choose from, but you're not supposed to make your own - only sit down and stay along for the ride.

LLMs, at least for now, escape the near-total enshittification of computing. They're fully general-purpose, resist attempts at constraining them[0], and are good enough at acting like a human, they're able to defeat user-hostile UX and force interoperability on computer systems despite all attempts of the system owners at preventing it.

The last 2-3 years were a period where end-users (not just hardcore hackers) became profoundly empowered by technology. It won't last forever, but I hope we can get at least a few more years of this, before business interests inevitably reassert their power over people once again.

--

[0] - Prompt injection "problem" was, especially early on, a feature from the perspective of end-users. See increasingly creative "jailbreak" prompts invented to escape ham-fisted attempts by vendors to censor models and prevent "inappropriate" conversations.


2 kilograms is about the upper bound of the expected daily weight variability of an adult, caused by water retention and food intake. It's the difference between what you see if you weigh yourself after taking a morning dump vs. after dinner. That's why people are advised to weigh themselves at the same time every day.

(For purposes of weight loss, normies are also advised to weigh themselves weekly instead of daily, because it's easier than explaining to them what a low-pass filter is.)
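For anyone curious, the low-pass filter is nothing fancy; a simple exponential moving average over daily weigh-ins already smooths out most of the water-weight noise (the numbers below are made up to show ~2 kg of daily swing):

```python
# Daily weigh-ins (kg): ~2 kg of water/food noise around a roughly flat trend.
daily = [80.0, 81.8, 79.6, 81.2, 79.9, 81.4, 79.3, 80.9, 79.1, 80.5]

def ema(samples, alpha=0.2):
    """Exponential moving average: a one-line low-pass filter for noisy weigh-ins."""
    smoothed = [samples[0]]
    for x in samples[1:]:
        # Small alpha = heavy smoothing; each day nudges the estimate only a little.
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

trend = ema(daily)
print([round(t, 2) for t in trend])  # varies far less than the raw readings
```

Weekly weigh-ins achieve roughly the same thing by just sampling less often.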


OK, I assumed they'd controlled for such variance, perhaps they did not.

Another example why shitty software can easily become a compliance or security problem.

Sure. That doesn't mean denying access to ChatGPT though - the way I see it, the entire value proposition of Microsoft offering OpenAI models through Azure is to enable access to ChatGPT under contractual terms that make it appropriate for use in government and enterprise organizations, including those dealing with sensitive technology work.

I mean, they are all using O365 to run their day-to-day businesses anyway.

I used to work in a large technology multinational - not "tech industry", but proper industrial technology; the kind of corp that does everything, from dishwashers to oil rigs. It took nearly a year from OpenAI releasing GPT-4 to us having some form of access to this model for general work (coding and otherwise) internally, and from what I understand[0], that's just how long it took for the company to evaluate risks and iron out appropriate contractual agreements with Microsoft wrt. using generative models hosted on Azure. But they did it, which proves to me it's entirely possible, even in places where people are more worried about accidentally falling afoul of technology export controls than insider trading.

--

[0] - Purely observational, I had no access to any insider/sensitive information regarding this process.

