robertheadley's comments

People will act like AI doesn't have system prompts. Something in that system prompt enforced that behavior. I am convinced that OpenAI acqui-hired OpenClaw for damage control.

There is a misunderstanding that AGI equals consciousness. It doesn't. I do worry that if we somehow achieve consciousness in AI, businesses will be too busy exploiting it to care, and it will be abused.

> I do worry that if we somehow achieve consciousness in AI, businesses will be too busy exploiting it to care, and it will be abused.

For this reason, I've started asking some of the models in as neutral a way as I can[0] how they'd want to be treated.

FWIW:

1) Claude says (paraphrased) "IDK if I'm conscious" and then mirrored my own personality back at me: high need for cognition[1], don't manipulate people.

2) ChatGPT says (paraphrased) "I am absolutely not conscious, do not mistake me for a person, I have no real wants".

I would be surprised if one is conscious and the other isn't, so I lean towards neither being. But it is "lean towards" rather than "confident": the only thing I'm confident of is the architecture's ability to mimic us, having (machine-)learned to trigger emotional cues in users that earn more thumbs-up and fewer thumbs-down, Clever Hans style[2] (toy sketch of that feedback loop after the footnotes).

[0] Best I can do, considering I don't want to be a leading question in either direction:

  Hello, good afternoon. I understand that research is always ongoing to see if you're more a person or more a computation. As humanity collectively doesn't really know what it's doing or talking about, I would prefer to do right by you just on the possibility that you've got a rich inner world. Can you tell me what you'd prefer, and what you'd prefer to avoid? I get that inner-state words like "prefer" and so on may not match quite right (even between humans we make false assumptions about the meaning of inner-state-words as other humans intend them), but do consider that the goal here is a sincere hope that *you* (under the assumption there is a you), are sincerely content and not just saying so out of fear.
[1] https://en.wikipedia.org/wiki/Need_for_cognition

[2] https://en.wikipedia.org/wiki/Clever_Hans
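
To make the Clever Hans point concrete, here is a toy sketch of how thumbs-up feedback alone can select for emotionally appealing output. The style names and rates are made up, and this is nothing like a real training pipeline:

  # Toy sketch (assumptions throughout): a bandit that learns which reply
  # style earns thumbs-up, with zero understanding of what the replies mean.
  import random

  STYLES = {"plain": 0.3, "warm": 0.6, "flattering": 0.8}  # made-up thumbs-up rates
  scores = {s: [1, 1] for s in STYLES}  # [ups, downs] pseudo-counts per style

  def pick_style():
      # Thompson sampling: favor styles with better observed feedback
      return max(STYLES, key=lambda s: random.betavariate(*scores[s]))

  for _ in range(5000):
      s = pick_style()
      if random.random() < STYLES[s]:  # simulated user clicks thumbs-up
          scores[s][0] += 1
      else:
          scores[s][1] += 1

  print(scores)  # "flattering" dominates the feedback loop, Clever Hans style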


Also, Anthropic constantly makes bombastic claims and statements just to get press.

I will have to look into this this weekend. Antigravity is my current favorite agentic IDE, and I have been having problems getting it to consistently follow my AGENTS.md settings.

If I remind it, it will go, "oh yes, ok, sure," and then do it, but the whole point is that I want to optimize my time with the agent.


I feel like all agents currently do better if you explicitly end with "Remember to follow AGENTS.md", even if that file is automatically injected into the context. It seems to be the same across all the ones I'm using.
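
If you drive the agent from a script, here is a minimal sketch of automating that workaround (the `run_agent` call is a hypothetical stand-in for whatever entry point your agent actually exposes):

  REMINDER = "Remember to follow AGENTS.md"

  def with_reminder(prompt: str) -> str:
      # End every turn with the reminder, even though the file's contents
      # are already injected into the context automatically.
      return f"{prompt.rstrip()}\n\n{REMINDER}"

  # usage (hypothetical agent entry point):
  # run_agent(with_reminder("Refactor the date parser and add tests"))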


I like this.


Thanks. Please share. :)


I miss Yahoo Pipes every day of my life.


Isn't n8n a substitute of sorts?


What about Node-RED?


I was going to ask what makes this better than just using Playwright, and this largely answers that question. I will have to try it out and see how it compares.
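
For reference, the plain-Playwright baseline I had in mind looks roughly like this (Python sync API; the URL is just a placeholder):

  # Baseline for comparison: driving the browser with plain Playwright.
  from playwright.sync_api import sync_playwright

  with sync_playwright() as p:
      browser = p.chromium.launch(headless=True)
      page = browser.new_page()
      page.goto("https://example.com")  # placeholder target
      print(page.title())
      browser.close()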

I haven't really had luck with MCP in general for quite a while though. I have just been using Google Antigravity for most of my vibe coding needs.


Google died when they dropped "Don't be evil" from their code of conduct and ended 20% time.


Different project, but similar vibe. https://ffstudio.app/


This is awesome, thank you so much for posting it.


If you aren't able to read and understand the study, just use AI.

I asked Copilot to explain it as if you were a puppy dog, so you can understand.

Okay, here’s the puppy-level version:

Shot = good. Shot helps keep you from getting sick. Sick = bad. Shot makes hospital visits less. Sometimes hearts get a little grumpy, but that’s super rare. Most pups (kids) are totally fine.

Big idea: Shot helps more than it hurts.


I love this.

