
I find the biggest crime with LLMs to be the size of the problems we feed them.

Every time I start getting lazy and asking ChatGPT things like "write me a singleton that tracks progression for XYZ in a Unity project", I wind up with a big hole where some deeper understanding of my problem should be. A better approach is to prompt it like "Show me a few ways to persist progression-like data in a Unity project. Compare and contrast them".
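
For concreteness, here's roughly the shape of thing the first prompt hands back: a hypothetical ProgressionTracker MonoBehaviour that persists one value through PlayerPrefs. The class name, the key, and the choice of PlayerPrefs are all illustrative, not taken from any actual answer.

    using UnityEngine;

    // Illustrative sketch: a scene-persistent singleton that tracks
    // progression and saves it via PlayerPrefs. Names are hypothetical.
    public class ProgressionTracker : MonoBehaviour
    {
        public static ProgressionTracker Instance { get; private set; }

        public int Level { get; private set; }

        void Awake()
        {
            // Enforce the singleton: keep the first instance, drop duplicates.
            if (Instance != null && Instance != this)
            {
                Destroy(gameObject);
                return;
            }
            Instance = this;
            DontDestroyOnLoad(gameObject);
            Level = PlayerPrefs.GetInt("level", 0);
        }

        public void AdvanceLevel()
        {
            Level++;
            PlayerPrefs.SetInt("level", Level);
            PlayerPrefs.Save();
        }
    }

The second prompt, by contrast, forces you to weigh something like this against the other usual options (JSON saves to disk, a dedicated save system, and so on) before committing to any of them.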

Having an LLM development policy where you ~blindly accept a solution simply because it works is like an HOV lane to hell. It is very tempting to do this when you are tired or in a rush. I do it all the time.



It all depends on what you do with it - I see the first prompt as just a slightly different starting place than the second one.



