FTC Probing OpenAI Over ChatGPT Exposing User Data

News by Nikola Djuric
Published: July 13, 2023

The United States Federal Trade Commission (FTC) has initiated an investigation into ChatGPT developer OpenAI, accusing the company of potential violations of consumer protection laws and risks to personal data associated with its chatbot.

According to a report published by the Washington Post on Thursday, the FTC has issued a comprehensive 20-page request for records from OpenAI, seeking insights into how it addresses the risks associated with its AI models.

If the FTC determines that OpenAI violated these laws, it could impose fines or subject the company to a consent decree, which would regulate its data handling practices.

Specifically, the FTC has called on OpenAI to provide detailed information regarding complaints that its products made false, misleading, disparaging, or harmful statements about individuals.

The agency is investigating whether OpenAI's practices constitute unfair or deceptive conduct resulting in reputational harm to consumers.

Furthermore, the FTC is examining a security incident disclosed by OpenAI in March, where a system bug allowed some users to access payment-related information and other users' chat history.

On March 20, ChatGPT experienced an outage that caused some users to see the titles of other active users' chat histories. In some cases, the first message of a newly created conversation was also visible in someone else's chat history if both users were active around the same time.

While OpenAI patched the bug and restored ChatGPT to service shortly after the incident, the company also noted that some users' personal information may have been exposed.

FTC Imposing Substantial Fines on Big Tech

As the leading regulator overseeing Silicon Valley, the FTC has previously imposed substantial fines on other Big Tech companies, such as Meta, Twitter, and Amazon, for alleged consumer protection violations.

For example, in May the FTC proposed a $90 million fine on Meta for violating a 2020 privacy order, alleging that Meta had misled parents about their ability to control who their children communicated with through its Messenger Kids app, and had misrepresented the access it provided some app developers to private user data.

Also in May, the FTC fined Amazon $25 million for violating the Children's Online Privacy Protection Act (COPPA). In its decision, the FTC alleged that Amazon had collected personal information from children under the age of 13 without parental consent.

A year ago, the FTC reached a settlement with Twitter to resolve allegations that the company violated a 2011 consent decree by deceptively using account security data to sell targeted ads. As part of the settlement, Twitter agreed to pay a $150 million civil penalty and to implement several new privacy safeguards.
