Robot Technology News
ROBO SPACE
New Study Confirms Large Language Models Pose No Existential Risk
by Sophie Jenkins
London, UK (SPX) Aug 13, 2024

ChatGPT and other large language models (LLMs) do not have the capability to learn independently or develop new skills, meaning they pose no existential threat to humanity, according to recent research conducted by the University of Bath and the Technical University of Darmstadt in Germany.

Published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the study reveals that while LLMs are proficient in language and capable of following instructions, they lack the ability to master new skills without direct guidance. As a result, they remain inherently controllable, predictable, and safe.

The researchers concluded that, despite being trained on increasingly large datasets, LLMs can continue to be deployed without significant safety concerns, though the potential for misuse still exists.

As these models evolve, they are expected to generate more sophisticated language and improve in responding to explicit prompts. However, it is highly unlikely that they will develop complex reasoning skills.

"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study on the 'emergent abilities' of LLMs.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the collaborative research team conducted experiments to evaluate LLMs' ability to tackle tasks they had not previously encountered, often referred to as emergent abilities.

For example, LLMs can answer questions about social situations without having been explicitly trained to do so. While earlier research suggested this capability stemmed from models 'knowing' about social situations, the researchers demonstrated that it is actually a result of LLMs' proficiency in a process known as in-context learning (ICL), where they complete tasks based on examples provided.
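The mechanism the researchers point to can be illustrated with a minimal sketch: in in-context learning, the task is never trained into the model; it is demonstrated through worked examples placed directly in the prompt, and the model completes the pattern. The example statements, labels, and the politeness task below are invented purely for illustration and are not taken from the study.

```python
# Minimal sketch of an in-context learning (ICL) prompt: the task is
# conveyed through demonstrations in the prompt itself, not through
# training. The statements and labels here are hypothetical.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (statement, label) demonstrations,
    ending with an unlabeled query for the model to complete."""
    blocks = [f"Statement: {text}\nPolite? {label}" for text, label in examples]
    blocks.append(f"Statement: {query}\nPolite?")
    return "\n\n".join(blocks)

demonstrations = [
    ("Could you possibly pass the salt?", "yes"),
    ("Give me the salt.", "no"),
]

prompt = build_few_shot_prompt(demonstrations, "Would you mind closing the window?")
print(prompt)
```

A prompt built this way would then be sent to the model as ordinary text; the study's argument is that apparent "emergent" task-solving reduces to this kind of example-following rather than independently acquired skill.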

Through extensive experimentation, the team showed that the combination of LLMs' abilities to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and limitations.

Dr. Tayyar Madabushi explained, "The fear has been that as models grow larger, they will solve new problems that we cannot currently predict, potentially acquiring hazardous abilities like reasoning and planning. This concern was discussed extensively, such as at the AI Safety Summit last year at Bletchley Park, for which we were asked to provide commentary. However, our study shows that the fear of a model going rogue and doing something unexpected, innovative, and dangerous is unfounded."

He further emphasized, "Concerns over the existential threat posed by LLMs are not limited to non-experts and have been expressed by some leading AI researchers worldwide. However, our tests clearly demonstrate that these fears about emergent complex reasoning abilities in LLMs are not supported by evidence."

While acknowledging the need to address existing risks like AI misuse for creating fake news or facilitating fraud, Dr. Tayyar Madabushi argued that it would be premature to regulate AI based on unproven existential threats.

He noted, "For end users, relying on LLMs to interpret and execute complex tasks requiring advanced reasoning without explicit instructions is likely to lead to errors. Instead, except for the simplest tasks, users will benefit from clearly specifying their requirements and providing examples whenever possible."

Professor Gurevych added, "Our findings do not suggest that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex thinking skills linked to specific threats is unsupported by evidence, and that we can effectively control the learning process of LLMs. Future research should, therefore, focus on other potential risks, such as the misuse of these models for generating fake news."

Research Report: Are Emergent Abilities in Large Language Models just In-Context Learning?

Related Links
University of Bath

