OpenAI’s head of Trust & Safety, Dave Willner, announced on Thursday that he has stepped down from the position and transitioned into an advisory role.
“I’m proud of everything our team has accomplished in my time at OpenAI, and while my job there was one of the coolest and most interesting jobs it’s possible to have today, it had also grown dramatically in its scope and scale since I first joined,” Willner noted in a LinkedIn post announcing his decision.
Explaining the move, Willner said that the growth of the company’s AI-powered product ChatGPT made it impossible for him to keep balancing his work with time for his family.
Willner previously served as Airbnb’s director of Community Policy and as head of Trust and Safety at Otter. He spent the past 18 months at OpenAI.
His departure comes at a critical time for AI companies.
Alongside the excitement about the capabilities of generative AI platforms — which are built on large language models (LLMs) and can rapidly generate text, images, and music from user prompts — there is a growing list of questions:
- How should business activity in the world of AI be regulated?
- How can harmful impacts across a wide spectrum of issues be mitigated?
Indeed, trust and safety are foundational to these conversations, and recent weeks have brought several notable developments at OpenAI.
The U.S. Federal Trade Commission (FTC) has opened an investigation into whether the company has violated consumer protection laws and put personal data at risk through its chatbot.
At the same time, OpenAI announced it is creating a new Superalignment team dedicated to managing the safety risks that have accompanied the industry’s rapid development since last November.