The good news is that it's also the golden age of static analysis right now. There are open-source and commercial tools that find most of the problems. Some are scaling up to codebases the size of the Linux kernel.
So not only can you do that: it's already standard practice for some developers, both inside and outside the commercial space. Though from what I see, it's mostly commercial developers doing it.
It is dynamic analysis, but I don't think that's the reason. Both Saturn and Facebook's tool scaled static analysis to the size of the Linux kernel. I think most developers and companies aren't using or pushing those tools as hard as they could, for cultural reasons. Look at the number of people who used sanitizers versus low-false-positive static analyzers, even where the latter were free. The static analyzers would've gotten them results sooner. It wasn't empirical evidence or effectiveness that led to that decision. Human factors trump technical factors in adoption decisions most of the time.
Note: I'm linking to Saturn since it's old work by a small team. I theorized that a big investment could achieve even more. Facebook's acquisition and continued investment, with excellent results, proved it.
If tech and money are the limit, tell me why Google hasn't straight-up bought the companies behind RV-Match and Mayhem to turn their tech on all of Google's code plus its open-source dependencies. Even if overpriced, it might actually turn out cheaper than all the bug bounties added up. Maybe even resell it cheaply as a service, Amazon-style. The precedent: Facebook bought a company that built a scalable tool they're now applying to their codebase. Then they were nice enough to open-source it. Google, Microsoft, Apple, etc. could each get one, too.
What's Google's excuse? They sure aren't broke or dumb. It's got to be cultural. Likewise, Microsoft built lots of internal tools. They do use at least two static analyzers: one for drivers, one for general software. I heard they even ship the second one in Visual Studio. They don't use most of the MS Research tools, though. The reason is cultural: the operations side knows they'll keep making money regardless of crap security.
As far as FOSS developers go, most don't care about security. Those who do tend to follow both what takes little effort (naturally) and what's trending. The trending topics are sanitizers, fuzzers, and reproducible builds. Hence you see them constantly, instead of people learning how to build and scale program analyzers, provers, etc., which have a much better track record. Note that I'm not against the other tools being built to complement the better methods. I just think that, if this were rational rather than emotional, you'd see most volunteer and corporate effort going to what has had the highest payoff with the most automation so far. For a positive example, a slice of academics are going all-in on adaptive fuzzers for exactly that reason, and they're getting great results like Eclipser.
I know a bunch of the people at Google who are responsible for program analysis. The idea that they haven't bought some company doing symbolic execution because of some internal vendetta against static analysis is just ridiculous.
I also find it weird that you reference Saturn, since Alex Aiken hasn't been driving heavy SAT stuff for a while. Clark Barrett is at Stanford now and is doing more things related to what you want. And... oh wait, he spent a few years at Google a few years ago.
Fuzzing plus sanitizers work like mad. They aren't magic, and interprocedural static analysis provides value too. But your claim that Google is ignoring these techniques, and doing so out of cultural idiocy, just isn't correct.
I'm saying they didn't care enough to make the kind of investment others were making, which would've solved a lot of their problems.
The article also indicates they couldn't get developers to do their job of dealing with the alerts, despite the false positives. If Coverity's numbers are right, there are over a thousand organizations whose managers did it better.
Since they didn't address that, Google would be about the best company to acquire expensive tech like Mayhem, which finds and fixes bugs, so their developers can keep ignoring them. Alternatively, start double-teaming the problem by investing in Facebook's tool too, folding in the best advances from SV-COMP winners.
I mean, their size, talent, and finances don't square with the results they've delivered (or not) in static analysis. They could do a lot more, with better payoff.
EDIT: Forgot to mention I referenced Saturn because the parent said the methods couldn't scale to the Linux kernel. Saturn was used on the Linux kernel over a decade ago. A few tools scale that big now.
You keep missing my point. I keep giving examples of large-scale, catch-about-everything systems. I'm not talking about merely having a few teams on something. This is Google, not a mid-sized business. You win if they already have their own version of Mayhem that fixes their bugs for them. Otherwise, they're behind Facebook in terms of the financial effort they'll put in to get great capabilities. I'm also going to assume they're playing catch-up to Infer unless they've released their analyzers, at least for peer review.
The Infer team is only about 15 people, last I checked. That's certainly not more than "a few teams on something". Also, separation logic and symbolic/concolic execution are such wildly different approaches that it seems odd to pivot to Infer here.
I obviously won't be able to convince you, since they aren't publishing at the same rate as Facebook. So you'll just have to take my word that Google doesn't have a vendetta against static analysis.
"The infer team is only like 15 people, last I checked."
The Infer team is claiming to get results on C/C++ with both sequential and concurrency errors via a tool they open sourced. I value scalable results over theories, team counts, etc. Does Google have a tool like that which we can verify by using it ourselves? Even with restrictions on non-commercial use? Anything other than their word they're great?
"that it seems odd to pivot to Infer here."
"So you'll just have to take my word that Google doesn't have a vendetta against static analysis. "
You really messed up on the second sentence, since I linked an article on Google's static-analysis work: I told people they're doing it. I mentioned Infer because it's Facebook pouring a lot of money into a top-notch, independent team to get lots of results. I mentioned that Google could do the same with companies like the one behind Mayhem, which built the exact kind of stuff Google seems to want per its published paper. They could do it many times over. If they have, I haven't seen it posted even once. They don't care that much.
Your claims about team size and a "vendetta against static analysis" are misdirection used to defend Google instead of explaining why they don't buy and expand tech like Infer and Mayhem. And heck, they can use Infer for free. My theory is that it's something along the lines of company culture.