I’ll admit that this comment struck me as strange too (and the thought that it might be generated also crossed my mind), but I try to keep an open mind, and after considering it, I think the point about white supremacy is worth engaging with. Note that the writer didn’t say “you’re a racist” but rather framed it fairly neutrally: “This can be seen as a form of white supremacy, as it can exclude voices and perspectives from people who may not have access”.
The question this raises is: who has access to state-of-the-art AI tech, who does not, and do these groups have a racial dimension? Objectively, surely they do. Don’t lightweight models therefore serve an important democratizing purpose, in that they make this tech accessible to (for instance) people in developing nations, who are overwhelmingly black and brown?
Oh I definitely agree that there are multiple levels of AI research that are valuable. Huge supporter of open source, and not meaning to talk down to anyone working on AI projects.
It's just that at the moment I'm finding the open source LLM community hard to contextualize from an outside perspective. Maybe it's because things are moving so fast (probably a good thing).
I just know that personally, I'm not going to be exploring any projects until I know they're near or exceeding GPT-4 performance. And it's hard to develop an interest in anything other than GPT-4 when comparison is so tough to begin with.
I'd suggest reading the recently leaked Google memo for some context about why open source LLMs are important (and are disruptive from the perspective of a large company). It gives a good insight into why closed source models like GPT-4 might be overtaken by open source even if they can't directly compete at the moment.
Typical reasons include highly specialised models that are cheap and fast to train, lack of censorship, freedom from API and usage restrictions, lightweight variants, and so on. The reason there's so much excitement right now is indeed how fast the space is moving.