
If I'm understanding the quoted interview correctly, Google is talking about AI-generated spam - like when you ask GPT-3 to write you an article about topic XYZ and it spits out 500 words of well-written, plausible-sounding gibberish that you throw up on your website to try to rank in the search engines.

However, they seem to be leaving open the possibility of AI-assisted writing, where a human comes up with the information and guides the AI as it puts that information into words.

> From our recommendation we still see it as automatically generated content. I think over time maybe this is something that will evolve in that it will become more of a tool for people. Kind of like you would use machine translation as a basis for creating a translated version of a website, but you still work through it manually.

> And maybe over time these AI tools will evolve in that direction that you use them to be more efficient in your writing or to make sure that you’re writing in a proper way like the spelling and the grammar checking tools, which are also based on machine learning. But I don’t know what the future brings there.

In my opinion GPT-3 has already reached the point of being useful for this purpose - there are several GPT-3 based apps that do exactly what he's describing.



A friend (a university science lab technician, in a field that didn't pay like CS) would make approx. $2/hour of extra money in the evenings from some Web site that told her what topics to write articles about: Google the topic, rapidly skim, distill/rephrase it to a certain word count. (I suggested she could make a lot more working in a cafe, but she was physically exhausted from being on her feet all day in the lab, moving heavy objects around.)

I guessed it was used for filler content for SEO sites.

A question is whether that company would save money by using "AI" text generation, when they were paying real humans so little, for arguably higher quality.


> A question is whether that company would save money by using "AI" text generation, when they were paying real humans so little, for arguably higher quality.

You can power a GPU for a few cents of electricity per day, so I think the answer is a resounding yes.
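A rough back-of-envelope sketch (the 300 W draw and $0.12/kWh electricity price are my assumptions, not figures from the thread; the $2/hour is from the parent comment):

  GPU_WATTS = 300             # assumed power draw of a single inference GPU
  PRICE_PER_KWH = 0.12        # assumed electricity price in USD

  gpu_cost_per_hour = GPU_WATTS / 1000 * PRICE_PER_KWH   # ~$0.036/hour
  human_cost_per_hour = 2.00                              # the $2/hour writer above

  print(f"GPU electricity: ${gpu_cost_per_hour:.3f}/hour")
  print(f"Human writer:    ${human_cost_per_hour:.2f}/hour")
  print(f"~{human_cost_per_hour / gpu_cost_per_hour:.0f}x cheaper on power alone")

And the GPU emits orders of magnitude more text per hour than a human can type, so the per-word gap is far larger still (ignoring hardware and API costs).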


I'm glad that I'm not one of the human moderators having to read GPT-3 gibberish on a daily basis.


I was a moderator for a large Internet forum. GPT-3 is far more coherent than a lot of humans.


My girlfriend was struggling with an assignment for university yesterday, so I put the questions into a GPT-3 site and the essays it wrote were better than either of us could have written by hand...


Mind sharing the site?


It's called YouWrite; it's from the guys who did the you.com search engine:

https://you.com/search?q=how%20to%20write%20well&fromSearchB...


I put in a moderately specialist topic I'm reasonably expert in. What it wrote was highly convincing. While I wouldn't be particularly impressed, I absolutely couldn't distinguish it from a mediocre human. It's really quite terrifying to think of the internet filling up with this garbage.


Is it garbage if it writes better than most of the humans in the developing world who are currently exploited to churn out meaningless blog spam?

Also, it would have worked perfectly for my girl's homework assignment, but she has morals and decided she couldn't ethically submit it. She did use the output for ideas on what to write, though.


It writes very convincing pop-sci articles about high-energy photons and the like. I see no way even a proper AI could distinguish them from similar articles written by journalists.


I asked GPT-3 for song requests and it was like: “The first caller requested _____ by ______. The second caller requested _____ by _____. The third caller…”

It didn’t seem to know any musical artists or songs. It was weird. When I asked for song suggestions, the best it could do was write bad poetry.


If it's simply a quality issue, then at the point that AI generated content becomes better than human generated content, will Google ban human generated content?


If captchas are any indicator, AIs will soon be better at convincing Google their content is human generated than humans are.

Note that, to win, the machines just need to produce approved content at a faster rate than humans do. If some AI can spew text at a measly 1 GB/s and Google blocks 99% of it, 10 MB/s still gets through. In 2016 there were ~4.6B pages on the internet. If each page contains 10 KB of text, then the AI in my example produces 1,000 pages per second. There are about 31 million seconds in a year, which means it would churn out ~31B pages per year, or the entire 2016 internet every couple of months.
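Spelling the arithmetic out (same assumptions as above: 1 GB/s of output, 99% blocked, 10 KB of text per page, ~4.6B pages on the 2016 internet):

  output_bps  = 1_000_000_000     # 1 GB/s of generated text
  block_rate  = 0.99              # fraction Google manages to filter out
  page_bytes  = 10_000            # 10 KB of text per page
  web_2016    = 4_600_000_000     # ~4.6B pages on the 2016 internet
  secs_per_yr = 31_000_000        # ~31 million seconds per year

  survivors_bps  = output_bps * (1 - block_rate)   # 10 MB/s gets through
  pages_per_sec  = survivors_bps / page_bytes      # 1,000 pages per second
  pages_per_year = pages_per_sec * secs_per_yr     # ~31B pages per year
  months_per_2016_web = 12 * web_2016 / pages_per_year

  print(f"{pages_per_sec:.0f} pages/s, {pages_per_year:.2e} pages/year")
  print(f"One 2016 internet every {months_per_2016_web:.1f} months")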

My conclusion? Internet search is screwed. Maybe people will start paying for curation.


Very useful insight / analogy. Thanks for sharing


At that point they'll probably just have their own AI that generates the perfect response to any search query, and return it as the top result.


After getting this result https://i.redd.it/4wrj2xpp75s81.png a couple days ago, I am not confident that will be any time soon.


Then the AI will build its own search engine.



