Most people overlook artificial intelligence despite flawless advice
by Staff Writers
Adelphi MD (SPX) Feb 01, 2019
If you were convinced you knew the way home, would you still turn on your GPS? Army scientists recently attempted to answer a similar question, driven by an ongoing concern that artificial intelligence, which can be opaque and frustrating to many people, may not be helpful in battlefield decision making.

"The U.S. Army continues to push the modernization of its forces, with notable efforts including the development of smartphone-based software for real-time information delivery such as the Android Tactical Assault Kit, or ATAK, and the allocation of significant funding towards researching new AI and machine learning methods to assist command and control personnel," said Dr. James Schaffer, scientist for RDECOM's Army Research Laboratory (ARL), the Army's corporate research laboratory, at ARL West in Playa Vista, California.

According to Schaffer, despite these advances, a significant gap in basic knowledge about the use of AI remains, and it is unknown which factors of AI will or will not help military decision-making processes. University and corporate research has made significant headway in solving this problem for applications like movie and restaurant recommendations, but the findings do not translate directly to the military world.

"For instance, many research studies and A/B testing, such as those performed by Amazon, have experimented with different forms of persuasion, argumentation and user interface styles to determine the winning combination that moves the most product or inspires the most trust," Schaffer said. "Unfortunately, there are big gaps between the assumptions in these low-risk domains and military practice."

The Army's research, a collaboration between Army scientists and university researchers at the University of California, Santa Barbara, hypothesizes that many people trust their own abilities far more than those of a computer, and that this affects their judgment when they are pressured to perform. According to Schaffer, this implies that even if flawless AI could be created, some people would not listen to its advice.

To control all relevant factors, the researchers constructed an abstract game similar to the Iterated Prisoner's Dilemma - a game in which players must choose to cooperate with or defect against their co-players in every round. The Iterated Prisoner's Dilemma has been used to model several real-world problems, such as military arms races, public sharing of resources and international politics.

The research team developed an online version of the game, in which players earned points by making good decisions in each round. An AI generated advice in each round, shown alongside the game interface, suggesting which decision the player should make. This setup gave the researchers the opportunity to design an AI that always recommended the optimal course of action. However, just as in real life - and just as you must manually switch on GPS - players were required to request the AI's advice manually, and were free to accept or ignore its suggestion.

The researchers also presented different versions of this AI: some were deliberately inaccurate, some required game information to be entered manually, and some justified their suggestions with rational arguments. All variations of these AI treatments were tested so that interaction effects between AI configurations could be studied.

People were invited to play the game online, and the researchers collected a profile of each player and monitored their behavior.
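The article does not publish the game's payoff values, the adviser's implementation, or the exact treatment levels, so the Python sketch below is illustrative only: it assumes a conventional Prisoner's Dilemma payoff matrix, a hypothetical optimal_advice helper standing in for the "flawless" adviser, and a hypothetical TREATMENTS table for the factorial design over AI variants.

    # Illustrative sketch only; payoff values, function names and treatment
    # levels are assumptions, not details published in the article.
    from itertools import product

    # Conventional Prisoner's Dilemma payoffs: (my_points, their_points),
    # keyed by (my_move, their_move), with "C" = cooperate, "D" = defect.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def optimal_advice(predicted_opponent_move: str) -> str:
        """A 'flawless' adviser: recommend the move that maximizes the
        player's points against the predicted opponent move."""
        return max("CD", key=lambda my: PAYOFFS[(my, predicted_opponent_move)][0])

    def play_round(player_move: str, opponent_move: str, consult_ai: bool):
        """One round: advice is shown only if the player requests it (like
        switching on GPS), and the player's own move is scored either way."""
        suggestion = optimal_advice(opponent_move) if consult_ai else None
        points = PAYOFFS[(player_move, opponent_move)][0]
        return points, suggestion

    # Hypothetical factorial design over the AI treatments the article lists,
    # so interaction effects between configurations can be studied.
    TREATMENTS = list(product(
        ["accurate", "deliberately_inaccurate"],
        ["auto_input", "manual_input"],
        ["justified", "unjustified"],
    ))

For example, play_round("C", "D", consult_ai=True) returns (0, "D"): the player cooperated against a defector even though the adviser recommended defecting, mirroring the kind of disagreement the study logged.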
For each player, the researchers asked about their familiarity with the game while also measuring their true competency. In addition, a test given halfway through play measured players' awareness of gameplay elements.

"What was discovered might trouble some advocates of AI - two-thirds of human decisions disagreed with the AI, regardless of the number of errors in the suggestions," Schaffer said.

The higher a player estimated their familiarity with the game beforehand, the less the AI was used - an effect that persisted even when controlling for the AI's accuracy. This implies that improving a system's accuracy will not, by itself, increase system adoption in this population.

"This might be a harmless outcome if these players were really doing better - but they were in fact performing significantly worse than their humbler peers, who reported knowing less about the game beforehand," Schaffer said. "When the AI attempted to justify its suggestions to players who reported high familiarity with the game, reduced awareness of gameplay elements was observed - a symptom of over-trusting and complacency."

Despite the added justifications, however, a corresponding increase in agreement with the AI's suggestions was not observed. This presents a catch-22 for system designers: incompetent users need the AI most of all, but are the least likely to be swayed by rational justifications, Schaffer said.

Incompetent users were also the most likely to say that they trusted the AI, as measured through a post-game questionnaire. "This contrasts sharply with their observed neglect of the AI's suggestions, demonstrating that people are not always honest, or may not always be aware of their own behavior," Schaffer said.

For Schaffer and the team, this research highlights ongoing usability issues with complex, opaque systems such as AI, despite continued advances in accuracy, robustness and speed.

"Rational arguments have been demonstrated to be ineffective on some people, so designers may need to be more creative in designing interfaces for these systems," Schaffer said. This could be accomplished, he said, by appealing to emotions or competitiveness, or even by removing the AI's visible presence, so that users do not register it and thus do not anchor on their own abilities.

"Despite challenges in human-computer interaction, AI-like systems will be an integral part of the Army's strategy over the next five years," Schaffer said. "One of the principal challenges facing military operations today is rapid response from guerrilla adversaries, who often have shorter command chains and thus can act and react more rapidly than the U.S. Armed Forces. Complex systems that can rapidly react to a changing environment and expedite information flow can improve response times and help maintain op-tempo - but only if given sufficient trust by their users."

The research group continues to experiment with different interfaces for AI systems so that all types of people can benefit from increasingly effective automated knowledge.
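As a rough illustration of what "controlling for the AI's accuracy" means here, the sketch below fits a linear model of AI usage on self-reported familiarity while holding the accuracy of the AI variant constant. The column names and the placeholder data are assumptions for the sketch, not the study's data or its actual analysis code.

    # Sketch of a "controlling for" regression; variable names and placeholder
    # data are assumptions, not the study's records.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200  # placeholder sample size, not the study's

    df = pd.DataFrame({
        "familiarity": rng.integers(1, 8, n),      # self-reported, e.g. a 1-7 scale
        "ai_accuracy": rng.choice([0.6, 1.0], n),  # accuracy of the AI variant shown
    })
    # Placeholder outcome wired to show the reported direction of the effect
    # (higher self-reported familiarity, lower AI usage); real values would
    # come from the game logs.
    df["used_ai"] = 0.7 - 0.05 * df["familiarity"] + rng.normal(0, 0.1, n)

    # With ai_accuracy held constant in the model, the familiarity coefficient
    # isolates the self-assurance effect on AI usage.
    model = smf.ols("used_ai ~ familiarity + ai_accuracy", data=df).fit()
    print(model.params)

A negative familiarity coefficient in such a model corresponds to the reported finding: players who rated themselves more familiar with the game consulted the AI less, no matter how accurate it was.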