Humans in the loop help robots find their way by Staff Writers Houston TX (SPX) Jun 29, 2022
Just like us, robots can't see through walls. Sometimes they need a little help to get where they're going.

Engineers at Rice University have developed a method that allows humans to help robots "see" their environments and carry out tasks. The strategy, called Bayesian Learning IN the Dark (BLIND for short), is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

The peer-reviewed study, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar with co-lead authors Carlos Quintero-Pena and Constantinos Chamzas of Rice's George R. Brown School of Engineering, was presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation in late May.

The algorithm, developed primarily by Quintero-Pena and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to "augment robot perception and, importantly, prevent the execution of unsafe motion," according to the study. To do so, the team combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have "high degrees of freedom" - that is, a lot of moving parts.

To test BLIND, the Rice lab directed a Fetch robot, an articulated arm with seven joints, to grab a small cylinder from one table and move it to another, but in doing so it had to move past a barrier.

"If you have more joints, instructions to the robot are complicated," Quintero-Pena said. "If you're directing a human, you can just say, 'Lift up your hand.'"

But a robot's programmers have to be specific about the movement of each joint at each point in its trajectory, especially when obstacles block the machine's "view" of its target. Rather than programming a trajectory up front, BLIND inserts a human mid-process to refine the choreographed options - or best guesses - suggested by the robot's algorithm.

"BLIND allows us to take information in the human's head and compute our trajectories in this high-degree-of-freedom space," Quintero-Pena said. "We use a specific way of feedback called critique, basically a binary form of feedback where the human is given labels on pieces of the trajectory."

These labels appear as connected green dots that represent possible paths. As BLIND steps from dot to dot, the human approves or rejects each movement to refine the path, avoiding obstacles as efficiently as possible.

"It's an easy interface for people to use, because we can say, 'I like this' or 'I don't like that,' and the robot uses this information to plan," Chamzas said. Once rewarded with an approved set of movements, the robot can carry out its task, he said.

"One of the most important things here is that human preferences are hard to describe with a mathematical formula," Quintero-Pena said. "Our work simplifies human-robot relationships by incorporating human preferences. That's how I think applications will get the most benefit from this work."

"This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human," said Kavraki, a robotics pioneer whose resume includes advanced programming for NASA's humanoid Robonaut aboard the International Space Station.

"It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences."
Research Report: Human-Guided Motion Planning in Partially Observable Environments