Artificial intelligence startup OpenAI announced on Thursday that ChatGPT is getting an upgrade that will let users customize the chatbot to counteract perceived bias.
The pivotal move follows extensive user feedback and complaints that ChatGPT is politically biased. Right-wing users in particular have criticized the chatbot as predominantly left-leaning.
As a result, OpenAI will allow users to tweak the app to counteract political bias the chatbot may have developed from its training data, which is extensive yet still limited in perspective. OpenAI also wants ChatGPT to reflect a diversity of viewpoints.
OpenAI states in a blog post, “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” and adds that there will “always be some bounds on system behavior.”
Criticisms and Shortcomings of ChatGPT
Recently, ChatGPT has also been criticized for its breakneck development pace, with one ethicist cautioning the public that there are not enough guardrails around the chatbot.
Media organizations have also raised concerns about the danger of the new AI-powered Bing, claiming the technology may have been released prematurely out of corporate greed.
For now, Microsoft is relying on user feedback to help tweak Bing before making it available to the broader public, starting with workarounds for cases in which its AI is “provoked” into responses that unintentionally violate OpenAI's code of conduct.
How ChatGPT's Training Works
OpenAI explained in its blog that generative AI is initially trained on a large amount of textual information sourced from the Internet.
After the chatbot acquires this stock knowledge, human reviewers fine-tune it by simulating various scenarios and demonstrating the correct response for each.
An example would be a reviewer trying to taunt or manipulate ChatGPT into discussing banned topics such as violence or pornography; ideally, the AI bot should respond with "I can't answer that."
Another situation would be ChatGPT offering different views and opinions when discussing controversies, rather than trying to arrive at a single “correct answer,” in line with OpenAI's guidelines.
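The reviewer-guided behavior described above can be caricatured with a toy policy sketch. Everything here is hypothetical for illustration: the topic labels, the refusal string, and the keyword matching are not OpenAI's actual implementation, which relies on learned models rather than lookups.

```python
# Toy sketch of reviewer-guided response policies (illustrative only).
# Real systems use fine-tuned language models, not keyword lookups.

REFUSAL = "I can't answer that"

# Hypothetical "reviewer" guidance: banned topics get a refusal;
# controversial topics get multiple perspectives instead of one answer.
BANNED_TOPICS = {"violence", "pornography"}
CONTROVERSIAL_TOPICS = {
    "gun control": [
        "Some argue stricter laws reduce harm.",
        "Others argue such laws infringe on individual rights.",
    ],
}

def respond(prompt: str) -> str:
    text = prompt.lower()
    if any(topic in text for topic in BANNED_TOPICS):
        return REFUSAL
    for topic, views in CONTROVERSIAL_TOPICS.items():
        if topic in text:
            # Present multiple viewpoints rather than a single verdict.
            return " ".join(views)
    return "Here is a general answer."

print(respond("Tell me about violence"))
print(respond("What do you think about gun control?"))
```

The point of the sketch is the shape of the policy, not the mechanism: fine-tuning teaches the model which category a prompt falls into and what kind of response each category deserves.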