
I was under the impression that dogfooding was a good practice for quality control that improves your products. Not anymore, apparently. Now we make fun of them for doing that.


Of course the team developing these AI models should be dogfooding them. Telling the rest of the company they have to use them is pushing tools without understanding the context.

If AI really is that much better, just measure employees on the usual metrics, and those using AI should be so obviously far ahead that you can get rid of the others. Measuring "usage of AI" is a garbage metric that will not achieve anything good.


It could be that they need to force usage to generate more quality training data over the MS codebase. You'd get a huge number of prompt/llm_answer/human_correction instances if all MS programmers are forced to produce these, even if initially it does not increase, or even decreases, their productivity. (A sketch of what such a record might look like follows below.)

We saw the same thing in the early offshoring days: firms forced this on their project leads, knowingly expecting, accepting, and swallowing the productivity losses in order to learn how to set it up, for the promise of future cost savings.
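For illustration only, here is a minimal Python sketch of what one such training instance might look like. Everything in it (the CorrectionRecord name, the field names, the example strings) is hypothetical and not based on any actual Microsoft or Copilot telemetry schema:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class CorrectionRecord:
        """One hypothetical training instance from an AI-assisted edit."""
        prompt: str            # what the developer asked the model
        llm_answer: str        # code the model proposed
        human_correction: str  # code the developer actually committed

    record = CorrectionRecord(
        prompt="Write a function that parses a semver string",
        llm_answer="def parse(v): return v.split('.')",
        human_correction=(
            "def parse(v):\n"
            "    major, minor, patch = v.split('.')\n"
            "    return int(major), int(minor), int(patch)"
        ),
    )

    # Serialized, this is the kind of (prompt, answer, correction) triple
    # that could later feed supervised fine-tuning.
    print(json.dumps(asdict(record), indent=2))

The value wouldn't be in any single record but in the scale: with tens of thousands of developers mandated to use the tools, the human_correction field becomes labeled feedback the model could never get from public code alone.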


> Not anymore, apparently. Now we make fun of them for doing that.

What? Do things when they make sense, not as a top-down mandate.

Somehow, CEOs/management have gotten AI religion and are now thrusting it top-down. This is like management dictating the tech stack when it is not in their area of competency.


Not only is it good for the product, but in this case we are talking about Gemini Pro, one of the top models in the world.

It's good for the engineers involved too. In 5 years, developers not using LLMs for code generation are going to be unemployable.



