
I was just today trying to fix some errors in an old Linux kernel version 3.x.x .dts file for some old hardware, so that I could get a modern kernel to use it. ChatGPT seemed very helpful at first - and I was super impressed. I thought it was giving me great insight into why the old files were now producing errors … except the changes it proposed never actually fixed anything.

Eventually I read some actual documentation and realised it was just spouting very plausible-sounding nonsense - and confidently so!

The same thing happened a year or so ago when I tried to get a much older ChatGPT to help me with USB protocol problems in some microcontroller code. It just hallucinated APIs and protocol features that didn’t actually exist. I really expected more by now - but I now suspect it’ll just never be good at niche tasks (and these two things are not particularly niche compared to some).



Eventually I read some actual documentation...

For the best of both worlds, make the LLM first 'read' the documentation, and then ask for help. Makes a huge difference in the quality and relevance of the answers you get.
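A minimal sketch of this pattern, assuming an OpenAI-style chat message format (the function name, prompt wording, and character limit are all illustrative, not from any particular library):

```python
def build_doc_grounded_messages(doc_text: str, question: str,
                                max_doc_chars: int = 20000) -> list[dict]:
    """Build a chat message list that puts the documentation in context
    before the question, so the model answers from the docs rather than
    from memory. Long docs are truncated, since very long contexts tend
    to degrade answer quality."""
    doc_excerpt = doc_text[:max_doc_chars]
    system = ("Answer strictly from the documentation provided. "
              "If the documentation does not cover the question, say so.")
    user = f"Documentation:\n---\n{doc_excerpt}\n---\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The returned list can be passed to any chat-completion style API.
msgs = build_doc_grounded_messages(
    "usb-device binding: required properties are 'compatible' and 'reg' ...",
    "Why does dtc reject this node?",
)
```

The truncation limit matters: stuffing an entire documentation tree into context often makes answers worse, not better, so excerpting just the relevant pages is usually the better trade-off.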


And hope the docs aren't too large. LLMs tend to confabulate more with longer contexts.



