
  User-agent: GPTBot
  Disallow: /


IMO you shouldn't have to keep track of which crawler bots exist and maintain a deny list. It should be the opposite: only expressly allowed content should be crawled, by maintaining allow lists.


You have been able to do the opposite since the inception of robots.txt:

  User-agent: *
  Disallow: /

and then whitelist Googlebot and whatnot. Most of the web is already configured this way; just check the robots.txt of any major website, e.g. https://twitter.com/robots.txt
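For example, a minimal sketch of a deny-by-default file that lets only one crawler through (Googlebot is just an illustrative name; an empty Disallow line means "allow everything" for that agent, and crawlers follow their most specific matching group rather than the wildcard):

  User-agent: Googlebot
  Disallow:

  User-agent: *
  Disallow: /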


The Allow: directive was an extension to robots.txt added later.
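With Allow: supported, you can also carve exceptions out of a blocked tree, e.g. something like this (the path is just illustrative):

  User-agent: *
  Disallow: /
  Allow: /public/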


That was my gut reaction too, but unless this becomes regulated, at least some of OpenAI's competitors presumably won't respect robots.txt at all, and thus any open content might end up as training data.


User-Agent: <new technology category>Bot



