that might be a concern in the EU, due to sui generis database protection laws, but model weights probably aren't copyrightable in the US under the Feist doctrine, so it's probably the license that's invalid
I don't think it has been conclusively decided that model weights are "just" a database. On the contrary, they are likely a derivative work of millions of different sources for which the creator doesn't have a license, or for which the licenses impose requirements (like CC attribution). In most European countries, as I understand it, there is no general fair use doctrine, only specific exceptions for citation, satire, libraries, etc. Regarding the output, people might try to claim that the "height of creation" (threshold of originality) is too low, i.e. that the result is obvious and not copyrightable, but that argument is going to fail because of the immense resources needed to train such a model.
So in Europe the problem is not the copyrightability of the model, which is almost certainly a protected work in its own right; it is the copyright status of the sources. I think this is one of the reasons why people release their models as "open source" (which is a misnomer): they could never uphold a copyright claim in court due to the muddy sources.
I work in software for the educational sector, and we frequently get requests from people who want to use ChatGPT etc. but can't, and one of their greatest concerns is the provenance of the training data. What we are going to need is either an LLM trained on properly licensed sources (unlikely), or a new law stating that processing copyrighted material into an LLM is legal.
I'm pretty sure there isn't any case law specifically about large language model weights and biases, so anything we say at this point is pretty uncertain.