U.S. Senate Sets Its Sights on ChatGPT

News by Marge Serrano
Published: February 16, 2023

ChatGPT, ironically one of the hottest keywords in Google searches right now, has finally begun to draw the attention of lawmakers after nearly two months as a mainstay of mainstream digital media.

The controversial artificial intelligence chatbot has garnered overwhelming attention, ranging from the open adulation of its ever-growing user base to the scathing criticism of the education sector.

Last month, ChatGPT reached 100 million active users, a remarkable feat considering that social media titan TikTok, which had its viral era during the pandemic, took nine months to reach the same figure. The famed chatbot needed only five days to reach its first 1 million users.

As with any other tool in the shed, ChatGPT's impressive capabilities carry both positive and negative implications. And just like its astounding viral growth, its potential state-level security risks have scaled quickly, prompting calls for regulation.

 

An Overpowered Fake News Bot?  

ChatGPT's viral growth drew the ire of various professors, with students using the app to cheat on their essays and exams. But ChatGPT's potential for misuse extends well beyond a few undergraduates risking their degrees with AI plagiarism.

BNH.AI Managing Partner Andrew Burt cleverly points out that "the whole value proposition of these AI systems is that they can generate content at scales and speeds that humans simply can't."

Setting aside the discourse on whether AI will revolutionize the labor system or throw it into utter chaos, ChatGPT's appeal goes beyond speeding up people's jobs or scaling businesses. The chatbot is a brilliant storyteller, which makes it perfect for embellishing news.

Deloitte Trustworthy Tech Ethics Leader Beena Ammanath states, "It spreads misinformation effectively. It cannot understand the content. So, it can spout out completely logical-sounding content but is incorrect. And it delivers it with complete confidence." 

"I would expect malicious actors, non-state actors and state actors that have interests that are adversarial to the United States to be using these systems to generate information that could be wrong or could be harmful," Burt adds. 

 

Lukewarm But Firm Stance on ChatGPT 

Democratic Representative Ted Lieu is personally thrilled by AI and all the "incredible ways it will continue to advance society." However, he adds that he is also "freaked out by AI, specifically AI that is left unchecked and unregulated."

While using ChatGPT, Lieu was struck by the idea of working with the AI on regulation, which is why he introduced a resolution in Congress written by ChatGPT itself.

The resolution states that Congress should "ensure that the development and deployment of AI are done in a way that is safe, ethical, and respects the rights and privacy of all Americans and that the benefits of AI are widely distributed and the risks are minimized."

Sam Altman, OpenAI's CEO, is coordinating with lawmakers himself. In January, he met with Senators Ron Wyden, Mark Warner, and Richard Blumenthal, as well as Representative Jake Auchincloss, to discuss the rapid advancement of AI and how it could be utilized.

Even so, the lawmakers were no-nonsense about the risks of AI. Wyden aide Keith Chu states, "While Senator Wyden believes AI has tremendous potential to speed up innovation and research, he is laser-focused on ensuring automated systems don't automate discrimination in the process."

 

OpenAI's Take on the Matter 

OpenAI recognizes the risks of developing AI perhaps better than any governing entity. It clearly states that its mission is "to ensure that artificial general intelligence benefits all of humanity," adding that it will "build safe and beneficial AGI."

The leading AI organization also admits to ChatGPT's shortcomings, particularly its tendency to hallucinate, clearly stating at the start of each chat that it "may occasionally generate incorrect information" and "may occasionally produce harmful instructions or biased content."

OpenAI is also forthcoming about its efforts against plagiarism, stating, "We don't want ChatGPT to be used for misleading purposes in schools or anywhere else, so we're already developing mitigations to help anyone identify text generated by that system."

When asked about OpenAI's stance on regulation, chief technology officer Mira Murati said the company welcomes feedback from everyone, stating, "It's not too early (for regulators to get involved)."
