Hacker News | thunderbong's comments


Thank you. Didn't know about this. Very interesting.

https://en.wikipedia.org/wiki/Triffin_dilemma


Looks to me like the issue is with the PR process, not with open-source.

From the article -

> It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.

I don't have a solution for this; I'm just pointing out the flaw in the assumption that AI is destroying open-source.


The solution is forking. Make a fork, update it to your heart's content. If it is found to be solid later, perhaps it will be studied and forked itself.

Yeah, been thinking that we should let the LLMs run riot on special AI branches— or heck, maybe Microsoft can buy/create AIGitHub.com.

But that's already true. Github lets people make a fork and have their AI run riot on it. What are you really suggesting if not the status quo?

That we embrace it generally. Even just proposing a naming convention would allow for agents to find the AI-sanctioned branch (or create it) and have at it.

(Maybe some AI agents can collaborate on "AILinux" and we can see how it measures up, ha ha.)


Maintainers could just say "No AI please" and refuse PRs they judge to be probably AI-generated. The AI operator can figure out how to make a fork if that's what they want. But they probably don't, so there's no point in anybody creating a system that nobody wants and nobody will use.

Now every contributor has a fork. That's bad for consumers. Forks should be temporary.

For me this was telling -

> Coding agents are designed to be accommodating, it doesn’t push back against prompts since it neither has the authority nor the context to do so. It may ask for clarifications upon what was specified, but it won’t say “wait, have you considered doing X instead?” A human developer would, or at least, they’d raise a flag. An LLM produces plausible output and moves on.

> This trait may be desirable as a virtual assistant, but it makes for a bad engineering teammate. The willingness to engage in productive conflict is part and parcel to good engineering: it helps broaden the search in the design space of ideas.

Whenever non-technical people ask me about LLMs, I tell them this - The goal of an LLM is not to give you correct answers. The goal of an LLM is to continue the conversation.


> The goal of an LLM is to continue the conversation.

It’s even simpler. The goal of an LLM is to generate the next token.

That’s reductive but worth considering. An LLM doesn’t have inherent goals and you aren’t privy to how it was post-trained or what on, so you can’t assume it’ll behave in any particular way.
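To make "the goal is to generate the next token" concrete, here's a toy sketch. This is not any real LLM, just a hypothetical bigram model: generation is nothing more than repeatedly sampling a plausible next token from a conditional distribution and appending it. Nothing in the loop optimizes for correctness, only for continuation.

```python
import random

# Hypothetical toy "model": a conditional distribution over the next
# token given only the previous token. A real LLM conditions on the
# whole context and has billions of parameters, but the generation
# loop is structurally the same.
bigram = {
    "the": {"goal": 0.5, "answer": 0.5},
    "goal": {"is": 1.0},
    "is": {"continuation": 1.0},
}

def generate(start: str, steps: int = 3) -> str:
    """Greedily extend a sequence by sampling next tokens."""
    tokens = [start]
    for _ in range(steps):
        dist = bigram.get(tokens[-1])
        if dist is None:  # no continuation known: stop
            break
        # Sample the next token in proportion to its probability.
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```

The point of the sketch: "answer" and "goal" are equally plausible continuations of "the", and the loop has no mechanism to prefer the true one, which is the reductive version of the claim above.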


If you select Search > Advanced from the menu, you get a window where you can enter the content to search for. This is available in the normal as well as the alpha version.


They might also be doing it for the sake of a better future for their children, not just for themselves.


There was a recent Show HN [0] on this for Android, showcasing DoNotNotify [1] -

[0]: https://news.ycombinator.com/item?id=46499646

[1]: https://donotnotify.com/


The main tweet the article is referring to -

https://x.com/BonesawMD/status/2010343792126128535


Thanks for this. I assumed there would be more rigor behind it, but it hardly seems credible; it relies mostly on anecdotes and "common sense".


The first paragraph of the article -

> Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47 minute video is now available on YouTube. I’ve included some of the highlights below.


