
People do care, in various ways:

- Price per unit of usage (per token, per request) matters, a lot.

- Making sure that under no circumstances the involved information leaks (including being trained on) matters a lot in many use cases. OpenAI does by now offer some controls here, but the degree to which you can actually enforce them is not enough for some use cases. Sometimes this is a hard constraint due to legal regulations.

- Geopolitics matters, sometimes. Being dependent on a US service is sometimes a no-go, even if you only operate in the EU (using self-hosted US software is usually fine, though).

- It's much easier to domain-adapt a model if its source/weights are accessible to a reasonable degree. GPT-4 has a fine-tuning API, but it's much less powerful, a direct consequence of GPT-4's highly proprietary nature (see the sketch after this list).

- A lot of companies are not happy at all about becoming heavily reliant on a single service that can change at any time: how it behaves, its pricing model, or whether it's available in your country at all. So basing your product on a less powerful but replaceable or open-source AI can be a good idea, especially if you are based in a country not on the best terms with the US.

- Do you trust Sam Altman at all? I do not, and it seems short-sighted to do so. In that case some of the points above become more relevant.

- GPT-3.5-level quality, especially in combination with domain adaptation, can be "good enough" for some use cases.
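
To make the domain-adaptation point concrete: here is a minimal sketch of what weight-level access buys you, using LoRA via Hugging Face transformers + peft. The model name, corpus path, and hyperparameters are placeholders I picked for illustration, not recommendations; with a closed API model you simply cannot attach adapters to the weights like this.

    # Minimal LoRA fine-tuning sketch for an open-weight model.
    # Assumes: pip install transformers peft datasets
    # "mistralai/Mistral-7B-v0.1" and "domain_corpus.txt" are placeholders.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments,
                              DataCollatorForLanguageModeling)
    from peft import LoraConfig, get_peft_model
    from datasets import load_dataset

    model_name = "mistralai/Mistral-7B-v0.1"  # any open-weight model works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA: train small low-rank adapter matrices instead of the full
    # weights. Only possible because the weights themselves are accessible.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)

    # Your domain corpus: one text sample per line.
    data = load_dataset("text", data_files={"train": "domain_corpus.txt"})
    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)
    data = data.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=1,
                               learning_rate=2e-4),
        train_dataset=data["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("domain-adapter")  # adapter weights only, a few MB

The adapter stays in-house, can be versioned and swapped onto a replacement base model later, and the training data never leaves your infrastructure, which also ties back to the leakage and lock-in points above.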


