> I mean, not seeing the potential in GPT is really being intentionally blind to world changing technology. Just the fact that it can scan your whole codebase, find potential security holes, suggest performance blind-spots. All of that in a chat window or IDE. It's revolutionary.
Except that we are just left with outputs that are untrustworthy. All of these GPT products — ChatGPT, Copilot, Bard, Bing AI — still frequently hallucinate answers and produce incorrect solutions. We have already seen this with Copilot writing vulnerable code.
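The vulnerabilities reported in studies of Copilot-generated code are typically mundane ones, such as SQL built by string interpolation. A minimal sketch of that pattern (a hypothetical illustration, not actual Copilot output) next to the parameterized alternative:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern often seen in generated code: user input is
    # interpolated directly into the SQL string, enabling injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                    # classic injection payload
leaked = find_user_unsafe(conn, payload)   # matches every row
safe = find_user_safe(conn, payload)       # matches nothing
```

With the injected payload, the unsafe version returns the entire table while the safe version returns an empty result — exactly the class of bug a reviewer has to catch in generated code.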
What this current AI hype cycle fails to grapple with is that output you cannot trust cannot be used safely in serious, highly regulated industries such as finance, law, and medicine. All of those fields require trust, and all have been targets of AI disruption for years with that same trust problem unsolved. It is not even enough to disrupt search engines.
There is nothing new or revolutionary about an AI SaaS business wrapping an API around a chatbot that generates nonsense. I expect the hype around LLM chatbots to subside just as the hype around social-audio apps like Clubhouse did.
An anonymous Clubhouse is back on the cards with real-time AI voice synthesis.
Trust in information is for people who outsource their every opinion; all they want to know is whether re-stating it and blindly following it will keep them in high esteem. And for law, health, and finance, it depends what year you got your opinion, since the best information changes. Which is what we want, if we want better.
AI output is just words on a screen; it only promises coherence. How well a technology assists you is up to you, or else we'd call it a torture device.
Seems like this is an interesting engineering or product problem worth solving. History is littered with big problems that were solved. Look at flight: within 50 years we went from "it's not possible" to "it's too fragile" to jets, to international airports and mass transit on a scale never before possible.
If you work in tech I think you've slept through your life.
> Seems like this is an interesting engineering or product problem worth solving.
Trust is a social problem, not an engineering problem. Without a fundamental breakthrough in neural-network transparency, LLMs used as search engines will remain untrustworthy.
Not even the Bing AI that Microsoft released is a trustworthy search engine [0]. In fact, it is less trustworthy than Google.