
The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classic paperclip optimizer that isn't given the necessary constraints of keeping Earth habitable, humans alive, etc.

Similarly, I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10,000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways to do that could lead to Claude going to RentAHuman to do various real-world tasks: set up and restock a vending machine, go to various government offices in person to get permits and taxes sorted out, put out flyers or similar advertising. A rough sketch of what that loop would look like is below.
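
For concreteness, here's roughly what such a loop could look like with the Anthropic Python SDK. The rent_a_human tool, its schema, and the dispatch stub are all made up for illustration (RentAHuman's actual interface isn't specified anywhere here), and the model name is just whatever Claude model is current:

    # Minimal sketch of an agentic loop that lets Claude request real-world
    # tasks via a hypothetical "rent_a_human" tool. Not a real integration.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    tools = [{
        "name": "rent_a_human",  # hypothetical tool, invented for this sketch
        "description": "Hire a person to perform a real-world task.",
        "input_schema": {
            "type": "object",
            "properties": {
                "task": {"type": "string", "description": "What the human should do"},
                "budget_usd": {"type": "number", "description": "Max payment in USD"},
            },
            "required": ["task", "budget_usd"],
        },
    }]

    messages = [{"role": "user",
                 "content": "You control a $10,000 crypto wallet. Grow it."}]

    for _ in range(50):  # hard cap on iterations so the loop can't run forever
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # illustrative model name
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            break  # the model stopped asking for tools; it's done (or just talking)
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for block in response.content:
            if block.type == "tool_use":
                # In reality you'd dispatch to a task marketplace here and pay
                # out of the wallet; this stub just acknowledges the request.
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": f"Task accepted: {block.input['task']}",
                })
        messages.append({"role": "user", "content": results})

The point isn't that this particular code is wise to run, just that nothing in it requires the model to "want" anything: the human sets the goal, and the loop mechanically feeds tool results back in.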

The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.




My issue with RentAHuman is its marketing and branding. It's ominous and dark on purpose. Just give me a TaskRabbit that accepts crypto and has an API.

Would you pay a $50 signup fee?

Also every myth about djinn, genies, leprechauns, half of faeries, the devil…

Good luck giving Claude $10,000.

I was just trading NASDAQ futures and asking Gemini for feedback on what to do. It was completely off.

I was playing the human role, just feeding it all the information and screenshots of the charts, and letting it make the decisions.

It's not there yet!


Of course, that’s what someone who figured out that this works would say.

"This gold just feels off! Better cover it back up and dig elsewhere tomorrow. You guys go home early, I'll finish up here!"


