Hacker News | abracos's comments

Isn't this an extremely difficult problem? It's very easy to game: vouch for one entity that then invites lots of bad actors.

At a technical level it's straightforward. Repo maintainers maintain their own vouch/denounce lists. Your maintainers are assumed to be good actors who can vouch for new contributors. If your maintainers aren't good actors, that's a whole other problem. From reading the docs, you can delegate vouching to newly vouched users as well, but this isn't a requirement.

The problem is at the social level. People will not want to maintain their own vouch/denounce lists because they're lazy. Which means if this takes off, there will be centrally maintained vouchlists. Which, if you've been on the internet for any amount of time, you can instantly imagine will lead to the formation of cliques and vouchlist drama.
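If this did take off, a maintainer-owned list could be as simple as a text file. As a toy sketch (the line format below is invented for illustration, not the actual tool's format):

```python
# Toy vouch/denounce list checker. The "vouch <user>" / "denounce <user>"
# line format is hypothetical, not the real project's file format.
def parse_list(text):
    vouched, denounced = set(), set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        action, user = line.split(maxsplit=1)
        if action == "vouch":
            vouched.add(user)
        elif action == "denounce":
            denounced.add(user)
    return vouched, denounced

def is_trusted(user, vouched, denounced):
    # denounce wins over vouch, so a bad actor can be cut off quickly
    return user in vouched and user not in denounced

vouched, denounced = parse_list("""
# maintainer-owned list
vouch alice
vouch bob
denounce bob
""")
```

The point of the denounce-wins rule is that a later denounce overrides an earlier vouch without having to edit history.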


The usual way of solving this is to make the voucher share responsibility if anyone they vouched for gets banned. That adds skin in the game.

A practical example of this can be seen in Lobsters' invite system, where if too many of the invitee accounts post spam, the inviter is also banned.

And another practical observation: far fewer people have a Lobsters account, or have even heard of it, than have heard of HN, partly because of that. Their "solution" is to make newcomers beg for invites in some chat. Guess who will jump through that hoop as many times as required, while a regular internet user won't bother? Yeah, the motivated malicious actor.

I think this is the inevitable reality for the future of FOSS. GitHub will degrade, and any real development will move behind closed doors and invite-only walls.

That's putting weight on the other end of the scale. Why would you want to stake your reputation on an internet stranger based on a few PRs?

You are not supposed to vouch for strangers, system working as intended.

You can't get perfection. The constraints and stakes are softer for what Mitchell is trying to solve, i.e. it's not a big deal if one slips through. That being said, it's not hard to denounce the tree of folks rooted at the original bad actor.

> The interesting failure mode isn’t just “one bad actor slips through”, it’s provenance: if you want to “denounce the tree rooted at a bad actor”, you need to record where a vouch came from (maintainer X, imported list Y, date, reason), otherwise revocation turns into manual whack-a-mole.

> Keeping the file format minimal is good, but I’d want at least optional provenance in the details field (or a sidecar) so you can do bulk revocations and audits.

Indeed, it's relatively impossible without ties to real world identity.

> Indeed, it's relatively impossible without ties to real world identity.

I don't think that's true? The goal of vouch isn't to say "@linus_torvalds is Linus Torvalds", it's to say "@linus_torvalds is a legitimate contributor and not an AI slopper/spammer". It's not vouching for their real world identity, or that they're a good person, or that they'll never add malware to their repositories. It's just vouching for the most basic level of "when this person puts out a PR it's not AI slop".


That’s not the point.

Point is: when @lt100, @lt101, … , @lt999 all vouch for something, it’s worthless.


But surely then a maintainer notices what has happened, and resolves the problem?

That's really easy to clean up, if you maintain the tree of trust. If a parent node gets whacked, all the child nodes do, too.

Real world identity isn't sufficient or necessary to solve that problem.

Then you would just un-vouch them? I don't see how it's easy to game on that front.

Malicious "enabler" already in the circular vouch system would then vouch for new malicious accounts and then unvouch after those are accepted, hiding the connection. So then someone would need to manually monitor the logs for every state change of all vouch pairs. Fun :)

It’s easy to game systems unless you attach real stakes, like your reputation. You can vouch for anyone, but if you consistently back bad actors your reputation should suffer along with everything you endorsed.
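A toy model of that stake: when an account is banned, every voucher up the chain takes a reputation hit that decays with distance. The decay factor, scores, and names here are all invented for illustration:

```python
# Hypothetical stake model: walk up the vouch chain from a banned user,
# applying a geometrically decaying reputation penalty to each voucher.
def penalize_chain(voucher_of, banned_user, reputation, decay=0.5):
    penalty, node = 1.0, banned_user
    while node in voucher_of:        # climb toward the root voucher
        node = voucher_of[node]
        penalty *= decay             # distant vouchers suffer less
        reputation[node] = reputation.get(node, 0.0) - penalty
    return reputation

rep = penalize_chain({"spammer": "inviter", "inviter": "oldtimer"},
                     "spammer", {"inviter": 1.0, "oldtimer": 5.0})
```

The geometric decay captures the intuition that the direct voucher bears most of the blame, while their voucher in turn bears some, but less.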

The web badly under-uses reputation and cryptographic content signing. A simple web of trust, where people vouch for others and for content using their private keys, would create a durable public record of what you stand behind. We’ve had the tools for decades but so far people decline to use them properly. They don't see the urgency. AI slop creates the urgency and yet everybody is now wringing their hands on what to do. In my view the answer to that has been kind of obvious for a while: we need a reputation based web of trust.

In an era of AI slop and profit-driven bots, the anonymous web is just broken. Speech without reputational risk is essentially noise. If you have no reputation, the only way to build one is by getting others to stake theirs on you. That's actually nothing new. That's historically how you build reputation with family, friends, neighbors, colleagues, etc. If you misbehave, they turn their backs on you. Why should that work differently on the web?

GitHub actually shows how this might work but it's an incomplete solution. It has many of the necessary building blocks though. Public profiles, track records, signed commits, and real artifacts create credibility that is hard to fake except by generating high quality content over a long time. New accounts deserve caution, and old accounts with lots of low-quality (unvouched for) activity deserve skepticism. This is very tough to game.

Stack Overflow is a case study in what not to do here. It got so flooded by reputation-hungry users that it became super annoying to use. But that might just be a bad implementation of what otherwise wasn't a bad idea.

Other places that could benefit from this are websites. New domains should have rock bottom reputation. And the link graphs of older websites should tell you all you need to know. Social networks can add the social bias: people you trust vouching for stuff. Mastodon would be perfect for this as an open federated network. Unfortunately they seem to be pushing back on the notion that content should be signed for reasons I never understood.


You can't really build a perfect system; the goal would be to limit bad actors as much as possible.

Is the main goal to see if LLM can do this sort of research and cross-pollination?

No, the goal is documenting the convergence pattern itself. We did use LLMs as research tools — acknowledged in the paper — but the cross-domain analysis and citation mapping are human work.

I'll explain how we got to this point. I had previously mentored my friend, Robin Macomber, in math & physics for several years. Robin independently discovered a variation of criticality math and asked me to evaluate it. After due consideration I recognized a pattern: his work echoed Kenneth Wilson's renormalization group theory, which I'd previously studied. I then conducted a detailed survey of all academic fields that touch on criticality (using an LLM!) and found, to my great surprise, that this same math had been independently discovered many times in many domains. So I wrote a paper about it.


See! This is where you're doing it right. Be a person!

Why do people rate "The Left Hand of Darkness" so highly? Is it because it was good at the time of writing? All the concepts there are very shallow and mainstream now.

edit: honest question, don't want to flame


"The Left Hand of Darkness" was published in 1969. I'm a transgender person in my 30s and Le Guin's writing makes me emotional every time I reread it. The ideas about gender and sexuality are more mainstream than they were almost 60 years ago, but the future is not evenly distributed and I think TLHOD would be eye opening for a lot of readers. Le Guin's prose and world building also place her among the best science fiction writers of all time.

I think a lot of Asimov stories fall into the same category. When you shape a genre, looking back it all seems so obvious. I do think Le Guin wrote much better characters than Asimov.

Agree on "I, robot", but foundation series is still very good (probably because it's not really character-focused)

Just to add to this: people say the same about, e.g., Citizen Kane being such a classic, but without the context of its genre-defining firsts, the film doesn't stand out as much to a modern viewer.

Asimov wrote characters?!

The core ideas are only mainstream in extremely modern and extremely liberal contexts. I bet the majority of teachers in this country would get shit for assigning this book, even at a college level.

Most people on earth still live in social and political environments where the core thought experiment of “The Left Hand of Darkness” – a human society without fixed male/female sexes – is not just unfamiliar but fundamentally unintuitive or threatening, which implies the book’s work is far from done.

In most countries, law, bureaucracy, language, and daily life remain built on a binary model of “men” and “women,” from ID documents to restrooms to family law. Surveys show that even where support for protecting transgender people from discrimination is relatively high, recognition of nonbinary identities and comfort with nonbinary social roles remains much weaker and highly contested. For a majority of readers shaped by these institutions, a society like Gethen, where nobody is permanently male or female and where gender roles have never crystallized, is not a recognizable extension of their world; it is a radical negation of how their societies are organized.

Globally, anti‑“gender ideology” movements and laws frame challenges to binary gender as dangerous Western imports, and they coordinate across borders from the US to Eastern Europe to parts of Africa and Asia. In places where same‑sex relationships are criminalized or where public discussion of queerness is suppressed, the premise of ambisexual humans would not just be controversial but literally unspeakable in mainstream forums. Even in regions that are relatively accepting of LGBT+ rights, polls show large minorities resistant to full legal and social recognition for trans and nonbinary people, indicating that the novel’s underlying claim – that gender categories themselves are contingent – remains outside everyday common sense.

Many major languages encode gender in grammar so deeply that even translating a gender‑ambiguous society is difficult, nudging readers back toward familiar male/female categories. This structural bias means that, for a majority of non‑English readers, the book’s attempt to erase stable gender can be partially blunted or reframed, underscoring just how far their linguistic and cultural worlds are from Gethen’s premise.

Research on nonbinary people repeatedly highlights “binary normativity”: the assumption that only two genders exist and are socially real, leading to erasure, misgendering, and lack of legal recognition. That everyday experience maps directly onto what Le Guin tried to imagine away on Gethen, showing that the novel’s central question – what happens to society when the binary disappears – still addresses a world that overwhelmingly cannot yet imagine such a disappearance. If most readers still inhabit strongly binary, often anti‑“gender ideology” cultures, then the book’s themes remain provocations from the margins rather than reflections of the mainstream, and its work of unsettling those assumptions is clearly not finished.


The keyword is now.

The first telephone is also pretty bad compared to today's phones.


Yes, but now it doesn't make sense to read it anymore, right? It reads as outdated and there are better books nowadays.

I don’t treat literature like tech books. A new novel doesn’t supplant an old one. New expressions of old ideas don’t make the old ideas obsolete.

There are always better books, but how do you know? Do you take my word for it? “Hey, I’ve read ALL the books, and these new books here are the best … trust me.” Better to read the old books yourself and be sure, right?


Can you list some better books for those of us who liked Le Guin and are interested in what could be better?

I would suggest "In the Mothers' Land" by Élisabeth Vonarburg. It's also about an alternate society centered on gender. I didn't really like The Left Hand of Darkness, but I liked that one. And Le Guin apparently praised the book too.

It's hard for me to understand what you liked in Le Guin's books, but maybe Children of Time?

That's a good example of "very shallow and mainstream" writing, but Tchaikovsky isn't in the same league as Le Guin at all.

Because it's a fucking great book.

Books aren't made just from concepts. It's a great exploration of human concepts and interactions.



It's about scale: when you've built something as grand as Google, you don't want to spend time building a garden.


For some people that's the case. For others after working on something so large they want to do something small that is wholly theirs.


Could at least try building a bigger garden!


Someone's trying to reproduce it in the open: https://github.com/kmccleary3301/nested_learning


Surprised this isn't by lucidrains, they usually have the first repro attempts.

This tidbit from a discussion on that repo sounds really interesting:

> You can load a pretrained transformer backbone, freeze it, and train only the HOPE/TITAN/CMS memory pathways.

> In principle, you would:

> - Freeze the shared transformer spine (embeddings, attention/MLP blocks, layer norms, lm_head) and keep lm_head.weight tied to embed.weight.

> - Train only the HOPE/TITAN memory modules (TITAN level, CMS levels, self-modifier projections, inner-optimizer state).

> - Treat this like an adapter-style continual-learning finetune: the base model provides stable representations; HOPE/CMS learn to adapt/test-time-learn on top.

----

Pretty cool if this works. I'm hoping more research goes into reusing already-trained models (beyond freezing existing parts and training the rest) so all that training effort doesn't get lost. Something that can reuse it with architecture enhancements would be truly revolutionary.
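The freeze/train split described above can be sketched framework-agnostically. The parameter names and prefixes below are illustrative, not the repo's actual module names; in PyTorch you'd iterate `model.named_parameters()` and set `requires_grad` per parameter:

```python
# Schematic of the adapter-style split: freeze the transformer spine,
# train only the memory pathways. Names/prefixes are hypothetical.
MEMORY_PREFIXES = ("titan.", "cms.", "self_modifier.")

def is_trainable(param_name: str) -> bool:
    # backbone (embeddings, attention/MLP, norms, lm_head) stays frozen;
    # only the HOPE/TITAN/CMS memory pathways learn
    return param_name.startswith(MEMORY_PREFIXES)

names = ["embed.weight", "blocks.0.attn.qkv", "lm_head.weight",
         "titan.memory.proj", "cms.level0.state"]
trainable = [n for n in names if is_trainable(n)]
```

In a real PyTorch loop this predicate would drive `p.requires_grad = is_trainable(name)`, and the optimizer would only receive the parameters where it returned True.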


Cool project, the space is very crowded: https://x.com/JeffDean/status/1991053401061536027 and http://semanticscholar.org/ come to mind


Yeah, there were several attempts (including ar5iv), and distill.pub is no longer active, plus Semantic Scholar is PDF-based. None quite made full use of HTML or have a robust conversion system. Jeff Dean's post is awesome, though using Gemini 3 is compute-intensive and may still hallucinate in the end (I'm using a source-based LaTeX-to-JSON parser). And the output is still...not very interactive.


For context: The website is called Rosa-Luxemburg-Stiftung. Rosa Luxemburg was a Polish and naturalised-German Marxist theorist and revolutionary. A very biased source


To have a certain political angle does not imply bias by definition


Everyone is biased by definition. Knowing in which direction someone drifts, and how strongly, supports your media literacy.


I think the thing that bothers me in the comment is "very biased". If everything is biased, there is not a lot of point in pointing that out.


Absolutely, I flagged this post for this very reason. Far left or right websites are no place to get a reliable account of things.


It's so insane that communism always gets a pass. I feel for the victims of that insane ideology.

https://en.wikipedia.org/wiki/Mass_killings_under_communist_...

> Estimates of individuals killed range from a low of 10–20 million to as high as 148 million.


Post a right-wing source, get flagged in minutes. Post literal Communist rhetoric, get to the front page with lots of excuses in between.


Politically colored ≠ biased


Bias - an inclination of temperament or outlook [1]

Politically colored = an inclination of outlook

[1] https://www.merriam-webster.com/dictionary/bias



How does it compare to AutoML tools?


TabPFN-2.5's default (one forward pass) matches AutoGluon 1.4 tuned for four hours. AutoGluon is the strongest AutoML system, including stacking of XGBoost and CatBoost, and it even includes the previous TabPFNv2.

