It looks to me like the issue is with the PR process, not with open source.
From the article -
> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.
I don't have a solution for this; I'm just pointing to the flaw in the assumption that AI is destroying open source.
The solution is forking. Make a fork, update it to your heart's content. If it is found to be solid later, perhaps it will be studied and forked itself.
My suggestion is that we embrace forking generally. Even just proposing a naming convention would let agents find the AI-sanctioned branch (or create it) and have at it.
(Maybe some AI agents can collaborate on "AILinux" and we can see how it measures up, ha ha.)
Maintainers could just say "No AI please" and refuse PRs that they judge are probably AI-generated. The AI operator can figure out how to make a fork if that's what they want. But they probably don't want that, so there's no point in anybody else building a system that nobody wants and nobody will use.
> Coding agents are designed to be accommodating; they don’t push back against prompts since they have neither the authority nor the context to do so. They may ask for clarifications on what was specified, but they won’t say “wait, have you considered doing X instead?” A human developer would, or at least they’d raise a flag. An LLM produces plausible output and moves on.
> This trait may be desirable in a virtual assistant, but it makes for a bad engineering teammate. The willingness to engage in productive conflict is part and parcel of good engineering: it helps broaden the search in the design space of ideas.
Whenever non-technical people ask me about LLMs, I tell them this:
The goal of an LLM is not to give you correct answers. The goal of an LLM is to continue the conversation.
> The goal of an LLM is to continue the conversation.
It’s even simpler. The goal of an LLM is to generate the next token.
That’s reductive, but worth considering. An LLM doesn’t have inherent goals, and you aren’t privy to how or on what it was post-trained, so you can’t assume it’ll behave in any particular way.
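The next-token framing is easy to make concrete. Here is a minimal sketch of autoregressive generation, with a tiny hard-coded bigram table standing in for the model (the table and function names are purely illustrative, not any real LLM API): generation is just repeatedly picking a continuation and appending it.

```python
# Toy autoregressive generation: at its core, an LLM repeatedly picks a
# next token given the tokens so far. A hard-coded bigram table stands
# in for the model here (illustrative only).
BIGRAMS = {
    "the": ["goal"],
    "goal": ["is"],
    "is": ["to"],
    "to": ["continue"],
    "continue": ["the"],
}

def generate(prompt, max_tokens=8):
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break  # no known continuation for this toy table
        tokens.append(candidates[0])  # greedy: take the top-ranked token
    return " ".join(tokens)

print(generate("the goal"))
# → the goal is to continue the goal is to continue
```

Nothing in the loop knows about correctness or goals; it only knows which token tends to come next, which is the point being made above.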
If you select Search > Advanced from the menu, you get a window where you can enter the content to search for. This is available in the normal as well as the alpha version.
> Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more, so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47-minute video is now available on YouTube. I’ve included some of the highlights below.