
GPT doesn't work that way. Any such shell would have to send a GPT request for every single keystroke, and you'd hit the API rate limit very quickly. It would also be very expensive.


Maybe this is a solid use case for local models, since they're optimized for consumer hardware: someone clever could preserve as much context as possible between requests.
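One common way around the per-keystroke problem is debouncing: buffer keystrokes and only fire a completion request after the user pauses. A minimal sketch, assuming a hypothetical `request_fn` that wraps whatever completion API (local or remote) the shell uses:

```python
import threading

class DebouncedCompleter:
    """Buffer keystrokes and request a completion only after the user
    pauses, so the shell doesn't issue one API call per key press."""

    def __init__(self, request_fn, delay=0.5):
        self.request_fn = request_fn  # hypothetical completion call
        self.delay = delay            # seconds of idle time before firing
        self._timer = None
        self._buffer = ""
        self._lock = threading.Lock()

    def on_keystroke(self, char):
        with self._lock:
            self._buffer += char
            if self._timer is not None:
                self._timer.cancel()  # reset the idle timer on every key
            self._timer = threading.Timer(self.delay, self._fire)
            self._timer.start()

    def _fire(self):
        with self._lock:
            text, self._buffer = self._buffer, ""
        self.request_fn(text)
```

With this, typing `ls -l` at normal speed produces a single request for the whole string instead of five; the same buffering point is also where cached context could be reused between requests.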


Well, eventually better models will be made, and they can be run locally.



