jithinraj's comments | Hacker News

Such a creative solution to what might seem like a small but really important problem! I love how precise and responsive the blur is.

Thanks so much for making Posturr. It definitely deserves more recognition!


This is a super creative and fun project.


Does the job well!


Web Bot Auth solves authentication (“who is this bot?”) but not authorization/usage control. We still need a machine-readable policy layer so sites can express “what this bot may do, under which terms” (purpose limits, retention, attribution, optional pricing) at a well-known path, robots.txt-like, but enforceable via signatures.
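
For concreteness, a policy at such a path might look roughly like this. The syntax and field names below are invented for illustration (PEAC's actual format may differ); the point is that each clause is machine-readable and the whole file can be signed:

    # /.well-known/peac.txt (illustrative sketch only)
    agent: *                      # which bots these terms cover
    purpose: search, ai-training  # permitted uses
    retention: 30d                # how long fetched content may be kept
    attribution: required         # reuse must credit the source
    price: negotiable             # optional; settled via HTTP 402
    signature: <site signature over the terms above>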

A practical flow (sketched in code below):

1. Bot self-identifies (Web Bot Auth)

2. Fetch policy

3. Accept terms or negotiate (HTTP 402 exists)

4. Present a signed receipt proving consent/payment

5. Origin/CDN verifies receipt and grants access
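
Here is a minimal client-side sketch of that flow in Python. Everything in it is illustrative: the Bot-Receipt header, the receipt format, and the shared-secret signing are assumptions for the demo (Web Bot Auth itself uses asymmetric HTTP message signatures), not anything from a published spec.

    # Hypothetical bot-side flow; header names and receipt format are invented.
    import base64
    import hashlib
    import hmac
    import json

    import requests

    SITE = "https://example.com"
    DEMO_KEY = b"demo-shared-secret"  # a real bot would sign with its Web Bot Auth key

    def fetch_policy(site: str) -> str:
        # Step 2: fetch the site's machine-readable terms from a well-known path.
        resp = requests.get(f"{site}/.well-known/peac.txt", timeout=10)
        resp.raise_for_status()
        return resp.text

    def make_receipt(site: str, terms_digest: str) -> str:
        # Step 4: a signed receipt binding this bot to the exact terms it fetched.
        payload = json.dumps({"site": site, "terms_sha256": terms_digest,
                              "agent": "examplebot/1.0"}).encode()
        sig = hmac.new(DEMO_KEY, payload, hashlib.sha256).digest()
        return base64.b64encode(payload + b"." + sig).decode()

    policy = fetch_policy(SITE)
    digest = hashlib.sha256(policy.encode()).hexdigest()

    # Steps 3 and 5: request content while presenting the receipt; a 402 means
    # the policy's pricing terms require payment/negotiation before access.
    resp = requests.get(f"{SITE}/some/article",
                        headers={"Bot-Receipt": make_receipt(SITE, digest)},
                        timeout=10)
    if resp.status_code == 402:
        print("Payment required; negotiate per the policy before retrying.")
    else:
        resp.raise_for_status()
        print("Access granted under the signed terms.")

The origin/CDN side would do the inverse: resolve the bot's verified identity, check the receipt's signature and terms digest against the current policy, and only then serve the content.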

That keeps things decentralized: identity is transport; policy stays with the site; receipts provide auditability. No single gatekeeper is required. There’s ongoing work in this direction (e.g., PEAC using /.well-known/peac.txt) that aims to pair Web Bot Auth with site-controlled terms and verifiable receipts.

Disclosure: I work on PEAC, but the pattern applies regardless of implementation.


Thanks for the thoughtful feedback; fair points all around. Let me clarify:

Who’s driving this: PEAC is open-source (Apache 2.0), started by a small group of developers who care about web standards, but it’s meant to be community-stewarded from the start. No one “owns” it; anyone can contribute, pilot, or join the working group (GitHub or the form). The real goal is collective evolution.

Business model: There isn’t one. It’s pure OSS, free to use or adapt, no strings or monetization. Just focused on enabling fair consent and attribution for everyone.

On “infrastructure”: Good catch. I meant the application layer (CDNs like Cloudflare, content platforms), not lower networking layers. This is about access and content rules, extending things like robots.txt.

On "X”: X is just one option. GitHub issues, HN, email, or the working group form all work (links in repo/blog).

If you or anyone here has thoughts or critiques, I’d love to hear them. How would you design the ideal solution?


Major web disputes, like the Perplexity/Cloudflare clash over AI crawlers, the NYT’s lawsuit against OpenAI, and publishers’ data concerns around Anthropic’s Claude, show that the “rules of access” for the web are more contested than ever.

Should these rules be set by individual publishers, infrastructure providers, AI companies, open standards, or by community consensus?

What would your ideal solution look like? Who should decide? What’s broken today?

Context:

• Perplexity: https://www.perplexity.ai/hub/blog/agents-or-bots-making-sen...

• Cloudflare: https://blog.cloudflare.com/perplexity-is-using-stealth-unde...


This format is awesome: it makes learning shortcuts easy and interesting, and it helps eliminate the fear of getting started.


Interesting! What's the delta like?

Thanks for trying. :)


Surprisingly, Claude has more fans! Glad you find it useful! We will be making changes to the UI. Many are willing to pay, but we don't know how much to charge for it yet! What do you think?

Sorry about the GPT-4 results; we'll restore them. Thank you for trying.


Update: PaLM 2 (Bard) is up and running now.

