LLM inference is a race to the bottom, but the service layers on top aren't. People will always pay much more for convenience; that's what OpenAI focuses on, and it's much harder to replicate.

