Amid the rapid developments and explosive hype surrounding artificial intelligence (AI), ethicists caution against the unintended consequences that accompany every technological advancement.
Beena Ammanath, Executive Director of the Global Deloitte AI Institute, says AI deployment today is akin to "building Jurassic Park," with companies "putting some danger signs on the fences, but leaving all the gates open."
"The challenge with new language models is (that) they blend fact and fiction," states Ammananth, who also leads Deloitte's Trustworthy Tech Ethics. "This is a new dimension that generative AI has brought in."
"It spreads misinformation effectively. It cannot understand the content. So, it can spout out completely logical-sounding content but is incorrect. And it delivers it with complete confidence,” she adds.
Ammanath made these remarks around the same time Microsoft released its AI-powered Bing, currently available only on desktop.
The Shortcomings of ChatGPT
Due to popular demand, OpenAI recently launched ChatGPT Plus, a $20-a-month pilot subscription for its trending AI chatbot. Users have flocked to try both the free and premium versions despite their known limitations.
Before using ChatGPT, users are shown a set of pointers, one of which covers its limitations. ChatGPT itself states that it "may occasionally generate incorrect information" and "may occasionally produce harmful instructions or biased content."
AI experts have even coined a term for this phenomenon: "hallucination." It refers to an AI confidently giving an incorrect answer that is not grounded in its training data, often triggered by confusing or misleading input.
OpenAI CEO Sam Altman acknowledged the issue, tweeting, "It's a mistake to be relying on it (ChatGPT) for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness."
"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness... it's a mistake to be relying on it for anything important right now. it's a preview of progress; we have lots of work to do on robustness and truthfulness." — Sam Altman (@sama), December 11, 2022
Media organization CNET witnessed the consequences of AI chatbots first-hand when it had to correct an article containing "substantial" errors. The outlet revealed it had been using ChatGPT to write articles since November last year.
Google Changes Tune
Google initially echoed this cautious sentiment, with Jeff Dean, Senior Fellow and SVP of Google Research and AI, stating that the company would mitigate "reputational risk" by moving "more conservatively than a small startup."
He explained that truthfulness, safety, and objectivity are paramount for a search engine. While Google will eventually roll out its AI products, it will be careful because "it's super important we get this right."
This was supported by Google Chief Executive Officer Sundar Pichai, who stressed that they "must bring experiences rooted in these models to the world boldly and responsibly."
Interestingly, Pichai sent a separate internal note to staff ordering all hands "on deck," saying Google would be "enlisting every Googler to help shape Bard" and contribute through a special company-wide effort.
On Monday, Google announced it would release Bard in response to the upgraded Bing and ChatGPT. Unfortunately, Bard offered an incorrect answer during its demo, sending shares of Google parent company Alphabet down 7.7%.