They should hire more humans to do a proper job and provide actual customer service, using their excessive profits. The AI/automation stuff is just cost cutting, and it's lowering the quality of service.
No one's saying not to have algorithms involved at all. Humans don't have to review all of the content, just the stuff the algorithms are unsure about, as well as takedown/review requests.
The latter is more important, I think: there is no reasonable way for a bad actor to set out to spam the REVIEWING of takedown requests. You could even make spamming takedowns more likely to trigger human review. So you can target a person on YouTube, you can try to do things to harass them, but it's not really feasible to try and attack the entire YouTube 'takedown review' process just to protect your attack on an individual.
How could this type of blackmail happen if someone was already reviewing the videos manually? Takedowns being automated is fundamentally at odds with free speech.
YT accepts all DMCA takedowns and takedown counter-notices to preserve its safe-harbor protection. Failing to take something down after receiving a DMCA takedown request would put YT at risk of a lawsuit if the submitter does own said copyright.
Content ID, on the other hand, is a problem of their own making.
A sufficiently poorly done job is indistinguishable from nothing being done at all.
It may well be that they can't do any better than they currently are, but since we don't have access to all of the data that they do, we can't make that determination.
$10 billion is a lot. And it's $10 billion that wouldn't make them any more money.
From google's perspective, none of these blackmail or customer support issues is an actual problem, because you, the customer, are gonna use google's services regardless. The business case for paying support personnel is non-existent. Even if you're a paying customer, the overhead of hiring more personnel is not an effective use of capital - it's better spent on R&D and other scalable revenue-generating options.
The reality is that watching every video all the way through is an extreme strawman of the actual amount of work they would need to do to have competent moderation, and yet even that strawman is within their budget!
Even watching every video that hits 1000 views would be huge overkill, and that alone would cut the workload by a factor of several thousand.
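For what it's worth, a rough back-of-envelope supports this. Every number below is an assumption for illustration (the oft-cited ~500 hours uploaded per minute, a guessed $15/hr fully-loaded reviewer cost, and a pure guess that 1 in 2000 videos ever reaches 1000 views), not actual YouTube figures:

```python
# Back-of-envelope cost of human review. All numbers are assumptions
# for illustration, not real YouTube figures.
HOURS_UPLOADED_PER_MINUTE = 500   # widely cited public estimate
REVIEWER_WAGE_PER_HOUR = 15.0     # assumed fully-loaded cost per reviewer-hour
MINUTES_PER_YEAR = 60 * 24 * 365

hours_per_year = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_YEAR
cost_watch_everything = hours_per_year * REVIEWER_WAGE_PER_HOUR

# Pure guess: only 1 in 2000 uploads ever clears 1000 views.
POPULAR_FRACTION = 1 / 2000
cost_watch_popular = cost_watch_everything * POPULAR_FRACTION

print(f"Watch everything: ${cost_watch_everything / 1e9:.1f}B/year")
print(f"Watch only 1000+ view videos: ${cost_watch_popular / 1e6:.0f}M/year")
```

Even the "watch literally everything" strawman comes out in the low single-digit billions per year under these assumptions, and the 1000-view filter drops it to the low millions.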
A more relevant metric might be how many channels are subjected to community-guideline or copyright strikes per day. Hiring humans to individually examine each of those cases is probably far more feasible than hiring humans to individually examine every single video.