
Depends. Probably not usually. I've thought about this a bunch and I think the serious "threat" here isn't the agent acting maliciously --- though agents will break out of non-hardened sandboxes! --- but rather the agent exposing some vulnerability that an actual human attacker then exploits.




I'd also add that I just don't like the idea in principle that I should have to trust the agent not to act maliciously. If an agent can run rm -rf / in an extreme edge case, theoretically it could also execute a container escape.

Maybe vanishingly unlikely in practice, but it costs me almost nothing to use a VM just in case. It's not impossible that certain models turn out to be poorly behaved, that attackers successfully execute indirect prompt injection via malicious tutorials targeting coding agents, or that some shadowy figure runs a plausibly deniable attack against me through an LLM API.
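
For concreteness, the "costs me almost nothing" setup I mean is roughly the sketch below. It's a minimal illustration, not a recipe: the qcow2 image path, the guest configuration, and how the agent actually gets started inside the VM are all assumptions on my part.

    # Boot a disposable QEMU VM for the coding agent. The image path and
    # guest setup are hypothetical; adapt to whatever dev image you keep around.
    import subprocess

    def run_agent_in_disposable_vm(project_dir: str, image: str = "dev.qcow2") -> None:
        """Boot a snapshot-mode VM; all guest disk writes are discarded on exit."""
        subprocess.run([
            "qemu-system-x86_64",
            "-enable-kvm",              # hardware virtualization (Linux hosts)
            "-m", "4096", "-smp", "2",  # modest resources are plenty for an agent
            "-snapshot",                # writes go to a temp overlay, never the base image
            "-drive", f"file={image},format=qcow2",
            # Expose only the project directory to the guest (mount via 9p inside it)
            "-virtfs", f"local,path={project_dir},mount_tag=project,security_model=mapped-xattr",
            "-nographic",               # serial console; log in and start the agent by hand
        ], check=True)

    if __name__ == "__main__":
        run_agent_in_disposable_vm("/home/me/project")

With -snapshot, even an rm -rf / inside the guest evaporates when the VM exits; the only host surface the guest can write to is the 9p-shared project directory.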


This is a genuine concern. But it seems largely independent of the execution environment --- it applies whether you run containers or VMs.

On a local machine, yeah, I think it's pretty situational. VMs are safer, but in risk management terms the win is sometimes not that significant.

In a multitenant cloud environment, of course, totally different story.



