The quality of generated code doesn't matter. The problem is when it breaks at 2 AM and you're burning thousands of dollars every minute. You don't own code you don't understand, but unfortunately that doesn't mean you don't own the responsibility. Good luck writing the postmortem; your boss will have lots of questions for you.
AI can help you understand code much faster. It lets me investigate problems where I have little context and still write fixes effectively.
This is not very surprising. I've always thought it's more correlation than causation: if you're a good problem solver, there's a good chance you're good at both college admissions and software engineering. So companies have been using it as a hiring proxy because... why not. I'm not saying college curricula are useless, but this dependency on an (imperfect) correlation may have caused significant opportunity costs in talent acquisition, and companies are now slowly acknowledging it.
I just hope they provide an option to get rid of all those predictive models and use a static, consistent layout. At least I can blame myself if a typo is my own mistake.
This looks interesting. The project has some novelty as research and actually delivered a promising PoC, but as a product it suggests the training was severely constrained by compute resources, which lines up with the report that their CFO overruled the CEO's decision on ML infra investment.
JG's recent departure and the massive follow-up reorg to get rid of AI, rumors of Tim stepping down in early 2026... all of these signals indicate that the non-ML folks have won the corporate politics and scaled back the in-house AI efforts.
I suppose this was part of a serious effort to deliver in-house models, but the directional change in AI strategy made them give up. What a shame... At least the approach itself seems interesting, and I hope others take a look and use it to build something useful.
Don't forget to mention the automatic enrollment of your production group into access-on-demand. Any minor access to production now requires the group manager's approval. I had a fun time with a production fire where only director-level folks could approve the access. The even funnier part is that this "refactor" was done without any prior notice.
Basically it all boils down to budget. The engineers knew this was a problem and wanted to fix it, but that costs money. And you know, the bean counters in the treasury were basically like, "well, it works fine, why do we need that fix?", and the last Conservative government was in full spending-cut mode. You know what happened there.
Internally, TPU is much cheaper than GPU for the same amount of compute, so I don't see many reasons for them to use GPUs. Probably >99% of the compute budget is spent on TPUs. You could argue the remaining <1% still counts, but I think it's safe to say that all of the meaningful production workloads run on TPUs. It's simply too expensive to run a meaningful amount of compute on anything else.
Just to clarify, TPU has been in development for a decade and is quite mature these days. Years ago internal consumers had to accept the CPU/GPU-and-TPU duality, but I think that case is getting rarer. I'd guess this is even more true for DeepMind, since it owns its own ML infra team; they can likely get most issues fixed with high priority.