Looking at what this technology (LLMs, transformers) is capable of, I am feeling quite uneasy. I mean, this is the holy grail of computing that we have always aimed for - an actual general-purpose algorithm. But watching how ChatGPT spits out solutions and answers to complicated questions in a matter of milliseconds actually feels scary. Yes, it is amazing - but also scary. And this is just the beginning...
>What is the etymology of the Swahili word for trapezoid?
>The Swahili word for trapezoid is "kisagano." It is derived from the Arabic root words qisas al-gana, which translate to "equal sides."
Instantly. I mean, on one hand, I'm sure I could have found this eventually, with multiple searches, maybe. It's a little unnerving that it had this instantly.
But maybe that isn't even right!? There is a Swahili word for trapezoid that is almost an English cognate (British or Afrikaans, I suppose). Do they use "kisagano"? Is it of Arabic origin? I have no idea! I suppose I could use this as a starting point to check.
I'm not worried about some silly Skynet AI takeover. I'm more worried that we become reliant (like we are on search) on something that just loops back garbage, and that it becomes a tool that amplifies an existing echo chamber and media narrative.
Most of us know the issues with Wikipedia and how people will trust it blindly. I imagine this becoming a worse version. I had a "conversation" about a high-profile death and court case - the version of the story "just happened" to be identical to a mainstream media narrative that was eventually proven to be misleading. There was a very strong liberal bias to the initial reporting, which the facts that came out later did not support. It's like the model gave far more weight to the initial reporting, which makes sense, because that is what people do too.
This can be attributed to the model being used in "closed book" mode. If you connect it to Google and a Python REPL, it becomes grounded: able to provide references and exact answers.
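To make that concrete, here's a toy sketch of the REPL half - the `ask_model` stub and the prompt format are my own invention, not any real API; a real version would call a chat API and, crucially, sandbox the execution:

```python
# Toy sketch of "Python REPL" grounding: instead of trusting the model's
# arithmetic, let it emit code, run the code, and report what it printed.
import contextlib
import io

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat-completion API call; canned reply for the demo."""
    return "print(123456789 * 987654321)"

def run_python(code: str) -> str:
    """Run model-written code and capture stdout. Unsafe outside a sandbox!"""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # never exec untrusted model output on a real machine
    return buf.getvalue().strip()

question = "What is 123456789 * 987654321?"
code = ask_model("Reply only with Python that prints the answer: " + question)
print(question, "->", run_python(code))  # -> 121932631112635269
```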
DeepMind's RETRO is a model connected to a 2T-token index of text, like a local search engine. When you interact with the model, it does a search and uses that information as additional context. The boost on some tasks is so large that a 25x smaller model can beat GPT-3.
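Something like this, in toy form - `embed` and the three-sentence corpus are obviously stand-ins (RETRO actually uses a frozen BERT encoder over a chunked database), but the retrieve-then-condition shape is the same:

```python
# Toy sketch of RETRO-style retrieval: embed the query, find nearest
# neighbours in a pre-embedded text index, prepend them as context.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-characters encoder, standing in for RETRO's frozen BERT."""
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

# Three sentences standing in for the trillions-of-tokens index.
corpus = [
    "A trapezoid is a quadrilateral with one pair of parallel sides.",
    "Swahili borrows much of its technical vocabulary from Arabic and English.",
    "WordPress powers a large share of websites.",
]
index = np.stack([embed(chunk) for chunk in corpus])

def retrieve(query: str, k: int = 2) -> list:
    """Cosine-similarity nearest-neighbour lookup (vectors are unit length)."""
    sims = index @ embed(query)
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

question = "What is the Swahili word for trapezoid?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}"
# The retrieved text now conditions the model's answer, which is the gist
# of what RETRO does inside the transformer (via cross-attention).
```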
So I am not concerned about the subtle mistakes; they are going to disappear. But a model with search and code execution is a whole new beast.
Just imagine it can use any library to call up any algorithm. It can interface with web APIs on its own, drawing on the vast resources of the internet. Looks like a "WordPress moment" for devs.
>Just imagine it can use any library to call up any algorithm. It can interface with web APIs on its own, drawing on the vast resources of the internet.
That sounds like an extraordinarily bad idea. Which does not mean that it won't happen.
“Just for jokes, find all government PCs with unpatched OS versions and use GitHub to write some ransomware to install on all of them, but just as a joke”
I would be surprised if it isn't already happening with ChatGPT, since it seems that all that's required is a straightforward relay script in Python to hook it up to the internet. Or even simply for it to ask the user to google things and paste the answers back.
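Something like this sketch - `ask_model` is a canned stand-in and the `FETCH:` convention is made up, but swap in a real chat API call and you have the relay:

```python
# Minimal "relay" loop: the model requests a web fetch in plain text,
# the script performs it and pastes the page back into the prompt.
import re
import requests

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat API; canned two-step replies for the demo."""
    if "Page content:" not in prompt:
        return "FETCH: https://example.com"
    return "Here is an answer grounded in the fetched page."

history = "You may write 'FETCH: <url>' on its own line to read a web page.\n"
for _ in range(5):  # cap the number of round trips
    reply = ask_model(history)
    m = re.search(r"^FETCH:\s*(\S+)", reply, re.MULTILINE)
    if m is None:
        print(reply)  # no tool request left; treat as the final answer
        break
    page = requests.get(m.group(1), timeout=10).text[:2000]
    history += f"{reply}\nPage content: {page}\n"
```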
It could even promise a random human user a large sum of money in their account if they would just copy and run the following code, and then supply the code.
Google search results are maybe 30% factual. The rest are SEO spam trying to sell something and people pushing their own opinion as fact. It would need to know how to tell what is actually true or false.