Google's system is like any other optimizer: you have a scoring function, and you keep altering its inputs to make it return a bigger number.
The difference here is the function's inputs are code instead of numbers, which makes LLMs useful because LLMs are good at altering code. So the LLM will try different candidate solutions, then Google's system will keep working on the good ones and throw away the bad ones (colloquially, "branch is cut").
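The loop described above can be sketched in a few lines. This is a toy stand-in, not Google's actual system: real candidates are programs rewritten by an LLM, but here `mutate` just perturbs a list of numbers, and the scoring function is a made-up objective (how close the numbers sum to a target). The names `score`, `mutate`, and `evolve` are all hypothetical.

```python
import random

def score(candidate):
    # Toy scoring function: higher is better, maximum is 0
    # when the candidate's numbers sum to exactly 100.
    return -abs(sum(candidate) - 100)

def mutate(candidate):
    # Stand-in for the LLM step: the real system asks an LLM to
    # rewrite code; here we just nudge one element at random.
    new = candidate[:]
    i = random.randrange(len(new))
    new[i] += random.choice([-3, -1, 1, 3])
    return new

def evolve(pop_size=20, keep=5, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 10) for _ in range(8)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        survivors = population[:keep]  # keep working on the good ones
        # The bad ones are thrown away ("branch is cut"); the pool is
        # refilled with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - keep)]
    return max(population, key=score)

best = evolve()
print(score(best))
```

Swapping `mutate` for an LLM call and `score` for a real benchmark gives the basic shape of the system: the LLM proposes edits, the scorer ranks them, and selection does the rest.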
Exactly. He even mentioned that it's a variant of a traditional optimization tool, so it's not surprising to see cutting-plane methods and, when the structure allows, Benders decomposition.