Comparing "rigid formal grammar-based models" (whatever that actually means for now) to machine learning is like comparing apples to bananas. The former is a rigorous syntactic formalization, aimed at being readable by machines and humans alike. The latter is a learned interpolation of a probability distribution. I don't see any way to compare these two "things".

Nevertheless, I can guess what you are actually trying to say: annotating data by hand (the syntax is completely irrelevant) is inferior to annotating data by machine learning. That claim is at least debatable and domain-dependent. There are domains where even a 3% false-positive rate translates to "death of a human being in 3 out of 100 identified cases", and there are domains where it is too much work to formalize all the bits and pieces of the domain, so extracting (i.e. learning) knowledge is the more feasible endeavor.

I have experience in both fields, and I dare say that extracting concepts and relations from text in a way that they can be further processed and used in some kind of decision process is far more complicated than you might imagine, and GPT-3 et al. do not achieve it.
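To make the apples-to-bananas point concrete, here is a deliberately toy sketch (all names and the regex "grammar" are my own invention, not from any real system): a hand-written rule either matches or it doesn't, and every match is auditable; a learned extractor, by contrast, attaches a probability to each output, so even its best answers carry a residual error rate — the 3% scenario above.

```python
import re

# Hand-written "grammar": an auditable, deterministic rule.
# (Toy example -- real grammar-based extractors use full parsers.)
DOSE_RULE = re.compile(r"(\d+(?:\.\d+)?)\s*(mg|ml)\b", re.IGNORECASE)

def extract_dose_by_rule(text):
    """Deterministic extraction: either the rule matches or it doesn't."""
    m = DOSE_RULE.search(text)
    return (float(m.group(1)), m.group(2).lower()) if m else None

def extract_dose_by_model(text, threshold=0.5):
    """Stand-in for a learned extractor: returns a guess plus a
    probability, so every output carries a residual error rate."""
    # A real model would be trained on data; this hard-coded score
    # only illustrates that the output is probabilistic, not a
    # yes/no rule match.
    score = 0.97 if DOSE_RULE.search(text) else 0.03
    guess = extract_dose_by_rule(text)
    return (guess, score) if score >= threshold else (None, score)

print(extract_dose_by_rule("give 2.5 mg twice daily"))   # (2.5, 'mg')
print(extract_dose_by_model("give 2.5 mg twice daily"))  # ((2.5, 'mg'), 0.97)
```

The point is not that one snippet is better than the other; it is that they answer different questions. The rule answers "does this text satisfy my formalization?"; the model answers "how likely is this text to contain the concept?". Which question you need answered is exactly the domain-dependence argued above.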