A groundbreaking achievement in AI has brought us one step closer to machines capable of natural human interaction.
Scientists have successfully developed a neural network-based AI that demonstrates remarkable human-like language generalization abilities, marking a significant milestone in the field.
This advancement addresses an important component of human cognition known as systematic generalization, enabling AI systems to rapidly integrate new vocabulary into their lexicon and apply it effectively in various contexts.
How Does the Neural Network Fare Against ChatGPT?
The research, led by Brendan Lake and Marco Baroni, involved a comparative analysis between the newly created neural network, ChatGPT, and human participants.
Despite ChatGPT's remarkable conversational abilities, the neural network outperformed it on the generalization tasks, challenging assumptions about what current AI systems can do.
This development, published in the scientific journal Nature, opens doors to more natural human-AI interactions, surpassing the capabilities of existing AI tools like OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing Chat.
According to Paul Smolensky, a cognitive scientist from Johns Hopkins University, the neural network's human-like performance represents a major step forward in training networks to "be systematic."
It All Comes Down to Systematic Generalization
Systematic generalization is a critical aspect of language comprehension involving the effortless application of newly acquired words in conversation.
One such example is the way people can use the term "photobomb" in various contexts once they have grasped its meaning.
However, neural networks have long struggled with this ability without extensive training, sparking debates within the AI research community regarding their suitability as human cognition models.
To evaluate systematic generalization, the researchers conducted tests with 25 participants who learned new words within a constructed pseudo-language.
Participants excelled at applying abstract rules, demonstrating an inherent knack for systematic thinking. The researchers then trained the neural network with a similar approach, enabling it to learn from its mistakes.
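To make the idea concrete, here is a minimal sketch of what a pseudo-language test of this kind can look like. The specific words ("dax", "fep", "kiki") and rules below are illustrative assumptions, not the study's actual vocabulary: primitive words map to output symbols, and function words compose them by a rule.

```python
# Toy pseudo-language: primitive words map directly to output symbols
# (e.g., colored circles). Words and rules are illustrative, not the
# study's actual materials.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(utterance):
    """Interpret an utterance compositionally.

    Illustrative grammar:
      - a primitive word produces its symbol
      - "X fep"    -> repeat X's output three times
      - "X kiki Y" -> produce Y's output, then X's output (reversal)
    """
    words = utterance.split()
    if len(words) == 1:
        return [PRIMITIVES[words[0]]]
    if len(words) == 2 and words[1] == "fep":
        return interpret(words[0]) * 3
    if len(words) == 3 and words[1] == "kiki":
        return interpret(words[2]) + interpret(words[0])
    raise ValueError(f"Cannot parse: {utterance}")

# Systematic generalization: once the rules are grasped, a combination
# never seen during learning, such as "wif kiki dax", should still be
# interpretable.
print(interpret("wif kiki dax"))
```

A learner who has only ever seen "dax fep" can still handle "lug fep" if it has grasped the rule rather than memorized the example, and that is exactly what the participants, and the new network, were tested on.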
To the surprise of the researchers, the method produced remarkable results. The neural network's performance closely mirrored that of human participants and surpassed even ChatGPT's capabilities.
"It’s not magic, it’s practice," explained Brendan Lake, one of the co-authors of the study.
"Much like a child also gets practice when learning their native language, the models improve their compositional skills through a series of compositional learning tasks."
While this study gives insight into how humans can enhance the learning efficiency of neural networks, the real challenge lies in scaling up this training method to handle larger datasets and expanding into additional domains, including image processing.
Lake hopes to develop more robust neural networks by drawing insights from how humans naturally develop systematic thinking skills from a young age.
Edited by Nikola Djuric