Published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the study reveals that while LLMs are proficient in language and capable of following instructions, they lack the ability to master new skills without direct guidance. As a result, they remain inherently controllable, predictable, and safe.
The researchers concluded that despite LLMs being trained on increasingly large datasets, they can continue to be used without significant safety concerns, though the potential for misuse still exists.
As these models evolve, they are expected to generate more sophisticated language and to improve at following explicit prompts. However, it is highly unlikely that they will develop complex reasoning skills.
"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study on the 'emergent abilities' of LLMs.
Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the collaborative research team conducted experiments to evaluate LLMs' ability to tackle tasks they had not previously encountered, often referred to as emergent abilities.
For example, LLMs can answer questions about social situations without having been explicitly trained to do so. While earlier research suggested this capability stemmed from models 'knowing' about social situations, the researchers demonstrated that it is actually a result of LLMs' proficiency in a process known as in-context learning (ICL), where they complete tasks based on examples provided.
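In-context learning can be illustrated with a short sketch: the model is shown a handful of worked examples inside the prompt itself and completes the pattern for a new input, with no change to its weights. The snippet below is a minimal, hypothetical illustration of how such a few-shot prompt might be assembled; the task, wording and labels are invented for illustration and are not drawn from the study.

```python
# Minimal sketch of in-context learning (ICL): the "teaching" happens entirely
# inside the prompt, as a handful of worked examples, with no weight updates.
# The task and examples here are invented for illustration only.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt that shows worked examples before the new query."""
    lines = ["Decide whether each remark is POLITE or IMPOLITE."]
    for text, label in examples:
        lines.append(f"Remark: {text}\nAnswer: {label}")
    lines.append(f"Remark: {query}\nAnswer:")
    return "\n\n".join(lines)

examples = [
    ("Would you mind passing the salt?", "POLITE"),
    ("Give me the salt. Now.", "IMPOLITE"),
]

prompt = build_few_shot_prompt(examples, "Could you possibly help me with this?")
print(prompt)
# The assembled prompt would then be sent to any instruction-following LLM;
# the model infers the pattern from the in-prompt examples alone.
```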
Through extensive experimentation, the team showed that the combination of LLMs' abilities to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and limitations.
Dr. Tayyar Madabushi explained, "The fear has been that as models grow larger, they will solve new problems that we cannot currently predict, potentially acquiring hazardous abilities like reasoning and planning. This concern was discussed extensively, such as at the AI Safety Summit last year at Bletchley Park, for which we were asked to provide commentary. However, our study shows that the fear of a model going rogue and doing something unexpected, innovative, and dangerous is unfounded."
He further emphasized, "Concerns over the existential threat posed by LLMs are not limited to non-experts and have been expressed by some leading AI researchers worldwide. However, our tests clearly demonstrate that these fears about emergent complex reasoning abilities in LLMs are not supported by evidence."
While acknowledging the need to address existing risks like AI misuse for creating fake news or facilitating fraud, Dr. Tayyar Madabushi argued that it would be premature to regulate AI based on unproven existential threats.
He noted, "For end users, relying on LLMs to interpret and execute complex tasks requiring advanced reasoning without explicit instructions is likely to lead to errors. Instead, users will benefit from clearly specifying their requirements and providing examples whenever possible, except for the simplest tasks."
Professor Gurevych added, "Our findings do not suggest that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex thinking skills linked to specific threats is unsupported by evidence, and that we can effectively control the learning process of LLMs. Future research should, therefore, focus on other potential risks, such as the misuse of these models for generating fake news."
Research Report: Are Emergent Abilities in Large Language Models just In-Context Learning?
Related Links
University of Bath