Because publishing evidence (potentially cherry-picked; this is privately funded research, after all) that their models might be dangerous conveniently implies they are very powerful, without actually having to prove the latter.
What struck me was the phrase "[...] trying not to hallucinate in meetings or machine learning models". That phrasing is completely incoherent, and it tells me that whoever wrote this piece doesn't have a clear understanding of the subject matter.
Either way, I don't much care whether this came from an LLM or from a real person who just doesn't know their stuff; it tells me not to expect any meaningful insights and that engaging with it is probably a waste of my time.