I don't know about this PR, but I suspect that people have wasted so much time on sloppy generated PRs that they've had to decide to ignore them just to have any time left for real people and real PRs that aren't slop.
If we are to consider them truly intelligent, then they have to bear responsibility for what they do. If they're just probability machines, then they're the responsibility of their owners.
If they're children, then their parents, i.e. their creators, are responsible.
They aren't truly intelligent, so we shouldn't consider them to be. They're a system that, for a given stream of input tokens, predicts the most likely next output token. The fact that their training dataset is so big makes them very good at predicting the next token in all sorts of contexts (the ones they have training data for, anyway), but that's not the same as "thinking". And that's why they go so bizarrely off the rails if your input context is some wild prompt that has them play-acting.
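To make that concrete, here is a toy sketch of "predict the most likely next token" using nothing more than a bigram frequency table. It is obviously nothing like a real transformer; the corpus and names are made up purely for illustration:

    # Toy "most likely next token" predictor: a bigram frequency table.
    # Only an illustration of the idea, not how an LLM is actually built.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which token tends to follow which token in the training data.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(context):
        """Return the most frequent continuation of the last token seen."""
        candidates = follows.get(context[-1])
        return candidates.most_common(1)[0][0] if candidates else None

    # Generate by feeding each prediction back in as new context.
    tokens = ["the"]
    for _ in range(5):
        tokens.append(next_token(tokens))
    print(" ".join(tokens))   # e.g. "the cat sat on the cat"

A real model conditions on the whole context rather than just the previous token, and the table is replaced by billions of learned parameters, but the output step is still "which token usually comes next here?".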
We aren't, and intelligence isn't the question; actual agency (in the psychological sense) is. If you install some fancy model but don't give it anything to do, it won't do anything. If you put a human in an empty house somewhere, they will start exploring their options. And mind you, we're not purely driven by survival either; neither art nor culture would exist if that were the case.
I agree. I'm trying to point out to the over-enthusiasts that if these systems really had reached intelligence, it would have lots of consequences that they probably don't want. Hence they shouldn't be too eager to declare that the future has arrived.
I'm not sure that a minimal kind of agency is super complicated, BTW. Perhaps it's just a matter of connecting the LLM into a loop that processes its sensory input and produces output continuously? But you're right that it lacks desires, needs, etc., so its thinking is undirected without a human.
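Roughly this shape of thing, say. A bare sketch, where model(), sense() and act() are hypothetical stand-ins for a real LLM call, a sensor feed and an actuator:

    # Minimal "agency loop" sketch: sense, fold into context, ask the model,
    # act, repeat. All three callables are placeholders, not a real API.
    import time

    def sense() -> str:
        """Stand-in for sensory input (camera frames, logs, messages, ...)."""
        return time.strftime("the clock reads %H:%M:%S")

    def model(context: str) -> str:
        """Stand-in for an LLM call that returns the next action as text."""
        return "note: " + context[-40:]

    def act(action: str) -> None:
        """Stand-in for whatever effector the system has."""
        print("acting on ->", action)

    context = ""
    for _ in range(3):              # a real agent would just loop forever
        context += " " + sense()    # new observations extend the context
        act(model(context))         # the model's output drives behaviour
        time.sleep(1)

Whether that deserves the word "agency" is another matter, of course: there's still no goal in the loop unless somebody puts one into the context.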
I think that a comparison with engineering is not that helpful for software.
Software has zero construction cost, but what it does have is extremely complicated behavior.
Take a bridge, for example: the use case is being able to walk or drive or ride a train across it. It essentially provides a surface to travel on. The complications of providing this depend on the terrain, length, etc., and are not to be dismissed, but there's relatively little doubt about what a bridge is expected to do. We don't iterate bridge design, partly because we don't need to learn much from the users of the bridge (does it fulfill their needs, is it "easy to use", etc.) and partly because construction of a bridge is extremely expensive, so iteration is also incredibly costly. We don't, however, build all bridges the same: people develop styles over time which they repeat for successive bridges, and we iterate that way.
In essence, iterating is about discovering more accurately what is wanted, because it is so often the case that we don't know precisely at the start. It allows one to be far more efficient, because one changes the requirements as one learns.
Big upfront designs are obviously based on big upfront knowledge, which nobody has.
When they turn out to be based on false assumptions of simplicity, the fallout is that the whole thing can't go forward because of one of the details.
Evolutionary systems, at least, always work to some degree, even if looking back after the fact you decide that there's a lot of redundancy. Ideally you would then refactor the most troublesome pieces.
Big upfront design always tries to specify too many things that should be implementation details. Meanwhile, the things that are really important are often ignored, because you don't even realize they are important at the time.
Commercial interests. Linux has benefited greatly from companies adopting it and paying developers, but at the same time there has been a price to pay, and this kind of thing is it.
Since it's all open source, I think we're reasonably ok because we don't HAVE to do what the commercial distros chose to do.
The problem is if we let it become too difficult. Personally I think a thing like D-Bus is needed, but D-Bus itself is undesirable, as it adds another IPC type and has no connection to the filesystem, the BSD socket interface, or any of the other standard ways that interfaces can be discovered and used. It has a network effect that is not easy to avoid without accepting degradation in the UI.
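For contrast, here's the kind of interface the rest of the system already knows how to handle: a plain Unix-domain socket at a well-known filesystem path, so discovery is ls and access control is file permissions. The path and the one-line "protocol" are invented purely for illustration:

    # A service that is just a socket in the filesystem: no bus, no broker.
    # Server and client are squeezed into one script only to keep the sketch short;
    # normally they would be separate processes.
    import os, socket

    PATH = "/tmp/example-service.sock"   # made-up path; a real service might use /run/...

    if os.path.exists(PATH):
        os.unlink(PATH)

    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(PATH)                    # the interface now appears in the filesystem
    server.listen(1)

    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(PATH)                 # clients find it by path, nothing else needed
    client.sendall(b"ping\n")

    conn, _ = server.accept()
    print("server received:", conn.recv(16))

The point is only that the endpoint is an ordinary filesystem object you can list, chmod and chown, which is exactly the connection D-Bus doesn't give you.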
The more crap we end up accepting, the more difficult it becomes to be a lone developer, and the more the whole system will turn towards commercial interests and away from what it started out as.
That's the way open source works. The people who think there's a point go and fork, and those who don't stay put.
Linux distros have become extremely complicated IMO. Systemd is not the worst example of this - the packaging systems are hard, and things like SELinux are very annoying. The stability is there because companies have spent money to make it so. There are enterprise features all over the place, etc. This just isn't what all of us necessarily want. I think there's room for distros which can be understood at a technical level - which are composed of easily replaceable units with defined functions.