OpenAI Discusses AI Regulation and Risk Mitigation for Future

News by Roberto Orosa
Published: May 23, 2023

With the rapid growth of AI technology, OpenAI executives Sam Altman, Greg Brockman, and Ilya Sutskever have addressed the risks of superintelligence and called for an international authority to oversee it in the company’s latest blog post. 

“We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination,” the joint statement read.  

The artificial intelligence giant, best known for launching ChatGPT and GPT-4, began by explaining the need for coordination among leading development efforts to ensure that superintelligence is kept safe and integrated smoothly into society.  

The company continued by giving examples of how this coordination could be implemented, such as setting up a government-led project that "many current efforts become a part of," or collectively agreeing to limit the rate of growth of AI capability to a certain amount per year. 

Additionally, the company emphasized the need for an IAEA-like agency for superintelligence efforts, under which systems that surpass a certain capability threshold would be subject to inspections, audits, tests, and restrictions imposed by an international authority.  

“As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it,” it wrote.  

OpenAI also expressed the need to make superintelligence safe – an “open research question that we and others are putting a lot of effort into.”  


The rapid development of AI has raised ethical concerns among the tech community, AI companies, and industry leaders from around the world.  

Last March, Elon Musk and other tech leaders signed an open letter calling for at least a six-month pause on giant AI experiments, citing “profound risks to society and humanity” and specifying that the pause apply to the training of systems “more powerful than GPT-4.” 

“This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter read.  

In OpenAI’s recent statement, the company appears to align with these sentiments to some degree. 

“We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development,” it explained. 
