OpenAI, the company behind the artificial intelligence (AI) chatbot ChatGPT, is shutting down its AI classifier tool due to its low accuracy rate.
The tool was developed to distinguish texts written by humans from those written by AI.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” the company wrote in an updated blog post.
Before shutting the tool down, OpenAI had already admitted that the classifier was “not fully reliable”: in its own evaluation, the tool correctly identified only 26% of AI-written text as “likely AI-written” and incorrectly flagged 9% of human-written text as AI-written.
The company also warned users against treating the classifier as a decision-making tool, noting that it was “very unreliable” on text inputs shorter than 1,000 characters and on text written in languages other than English.
The company has yet to announce when it will release a more refined version of the tool.
OpenAI’s AI classifier tool was first launched in January to address issues of misinformation and academic dishonesty with the use of ChatGPT.
While ChatGPT is far from perfect, OpenAI continues to expand the chatbot’s services in other areas. Last May, the company introduced an incognito mode that lets users turn off their chat history.
It also recently rolled out the ChatGPT app on both iOS and Android devices, allowing users to converse with the chatbot on the go.