Depends. Probably not, usually. I've thought about this a bunch, and I think the serious "threat" here isn't the agent acting maliciously (though agents will break out of non-hardened sandboxes!) but rather the agent exposing some vulnerability that an actual human attacker then exploits.
I'd also add that, in principle, I just don't like the idea that I should have to trust the agent not to act maliciously. If an agent can run rm -rf / in some extreme edge case, it could theoretically also execute a container escape.
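To make "non-hardened" concrete: a default container run hands the agent far more than it needs. A rough sketch of the kind of lockdown I'd want at minimum (these are standard Docker flags; the image name and workspace mount are placeholders, and you'd have to relax --network none if the agent itself calls an LLM API from inside the container):

    # Drop all capabilities, block privilege escalation, cut the network,
    # and keep the filesystem read-only outside an explicit workspace.
    docker run --rm -it \
      --read-only --tmpfs /tmp \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --network none \
      --pids-limit 256 --memory 2g \
      -v "$PWD/workspace:/workspace" \
      agent-image

Even with all of that, though, the container still shares the host kernel, so a kernel bug remains a way out.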
That may be vanishingly unlikely in practice, but it costs me almost nothing to use a VM just in case. It's not impossible that certain models turn out to be poorly behaved, that attackers pull off indirect prompt injection via malicious tutorials targeting coding agents, or that some shadowy figure runs a plausibly deniable attack against me through an LLM API.
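And the VM route really is cheap these days. A sketch using Lima (Multipass or plain QEMU would work just as well; the instance name "agent" is arbitrary, and last I checked Lima's default template mounts your home directory read-only):

    # One-time: create a disposable Ubuntu VM from Lima's default template.
    limactl start --name=agent
    # Do all the agent's work inside the VM shell, not on the host.
    limactl shell agent
    # If anything ever seems off, torch it and start over.
    limactl stop agent && limactl delete agent

A fresh VM takes a minute or two to create, which is a small price for not having to think about the problem at all.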