New robot rolls with the rules of pedestrian conduct
by Staff Writers
Boston MA (SPX) Aug 31, 2017
Just as drivers observe the rules of the road, most pedestrians follow certain social codes when navigating a hallway or a crowded thoroughfare: keep to the right, pass on the left, maintain a respectable berth, and be ready to weave or change course to avoid oncoming obstacles while keeping up a steady walking pace.

Now engineers at MIT have designed an autonomous robot with "socially aware navigation" that can keep pace with foot traffic while observing these general codes of pedestrian conduct. In drive tests performed inside MIT's Stata Center, the robot, which resembles a knee-high kiosk on wheels, successfully avoided collisions while keeping up with the average flow of pedestrians. The researchers describe their robotic design in a paper they will present at the IEEE Conference on Intelligent Robots and Systems in September.

"Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians," says Yu Fan "Steven" Chen, who led the work as a former MIT graduate student and is the lead author of the study. "For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals."

Chen's co-authors are graduate student Michael Everett, former postdoc Miao Liu, and Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT.
Social drive
Chen and his colleagues used standard approaches to solve the problems of localization and perception. For perception, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For localization, they used open-source algorithms to map the robot's environment and determine its position. To control the robot, they employed standard methods used to drive autonomous ground vehicles.

"The part of the field that we thought we needed to innovate on was motion planning," Everett says. "Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?"

That's a tricky problem, particularly in pedestrian-heavy environments, where individual paths are often difficult to predict. As one solution, roboticists sometimes take a trajectory-based approach, programming a robot to compute an optimal path that accounts for everyone's desired trajectories. These trajectories must be inferred from sensor data, because people don't explicitly tell the robot where they are trying to go.

"But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person's already moved way past it before it decides 'I should probably go to the right,'" Everett says. "So that approach is not very realistic, especially if you want to drive faster."

Others have used faster, "reactive-based" approaches, in which a robot is programmed with a simple geometric or physics-based model to quickly compute a path that avoids collisions. The problem with reactive-based approaches, Everett says, is the unpredictability of human nature: people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, reactive robots tend either to collide with people or to look as though they are being pushed around as they over-correct to avoid them.

"The knock on robots in real situations is that they might be too cautious or aggressive," Everett says. "People don't find them to fit into the socially accepted rules, like giving people enough space or driving at acceptable speeds, and they get more in the way than they help."
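To make the distinction concrete, here is a minimal sketch of a reactive, geometry-based planner of the kind Everett describes: at each step it scores a small set of candidate velocities against the goal while projecting nearby pedestrians forward at constant velocity, and it picks the best collision-free option. This is an illustration under assumed names and parameters (safety radius, planning horizon), not the MIT team's code.

```python
# Minimal sketch (not the MIT system): a reactive, geometry-based planner.
# At each control step, score a discrete set of candidate velocities against
# the goal while projecting nearby pedestrians forward at constant velocity.

import numpy as np

def reactive_velocity(robot_pos, goal, peds, v_max=1.2, horizon=1.0,
                      safety_radius=0.5, n_angles=16, n_speeds=4):
    """Pick the velocity that makes the most progress toward the goal
    without entering any pedestrian's projected safety radius.

    robot_pos : (2,) robot position [m]
    goal      : (2,) goal position [m]
    peds      : list of (position, velocity) pairs for nearby pedestrians
    """
    best_vel, best_score = np.zeros(2), -np.inf
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    speeds = np.linspace(0, v_max, n_speeds + 1)[1:]

    for speed in speeds:
        for ang in angles:
            cand = speed * np.array([np.cos(ang), np.sin(ang)])
            # Check the candidate against constant-velocity pedestrian projections.
            collides = False
            for t in np.linspace(0.1, horizon, 10):
                r_future = robot_pos + cand * t
                for p_pos, p_vel in peds:
                    if np.linalg.norm(r_future - (p_pos + p_vel * t)) < safety_radius:
                        collides = True
                        break
                if collides:
                    break
            if collides:
                continue
            # Score: progress toward the goal over the planning horizon.
            score = -np.linalg.norm(goal - (robot_pos + cand * horizon))
            if score > best_score:
                best_score, best_vel = score, cand
    return best_vel

# Example: one pedestrian approaching head-on, slightly to the robot's left.
if __name__ == "__main__":
    v = reactive_velocity(robot_pos=np.array([0.0, 0.0]),
                          goal=np.array([10.0, 0.0]),
                          peds=[(np.array([3.0, 0.2]), np.array([-1.2, 0.0]))])
    print("chosen velocity:", v)
```

Because each candidate is checked only against straight-line projections of the pedestrians, this kind of planner is fast, but it inherits exactly the weakness Everett points out: real people rarely keep to those straight-line projections.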
Training days
The team used reinforcement learning, a machine-learning approach in which computer simulations train the robot to take certain paths given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, encouraging the robot in simulation to pass on the right and penalizing it when it passed on the left.

"We want it to be traveling naturally among people and not be intrusive," Everett says. "We want it to be following the same rules as everyone else."

The advantage of reinforcement learning is that the researchers can run these training scenarios, which take extensive time and computing power, offline. Once the robot is trained in simulation, they can program it to carry out the optimal behaviors identified in the simulations whenever it recognizes a similar scenario in the real world.

The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 meters per second without pausing to reprogram its route.

"We're not planning an entire path to the goal - it doesn't make sense to do that anymore, especially if you're assuming the world is changing," Everett says. "We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing."
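The two ideas in this section, shaping the reward in simulation to encode a social norm and replanning the velocity every tenth of a second rather than computing a whole path, can be sketched in a few lines. The example below is an illustration only: the reward weights, the GreedyPolicy stand-in, and the loop interface are assumptions made for clarity, not the deep reinforcement learning implementation from the paper.

```python
# Illustrative sketch only (not the MIT implementation): a reward term that
# encodes the "pass on the right" norm during offline training, and the
# 0.1-second sense-choose-act loop described in the article.

import numpy as np

def social_reward(progress, collided, passed_on_left,
                  w_progress=1.0, collision_penalty=5.0, left_pass_penalty=0.5):
    """Reward signal for simulated training: reward progress toward the goal,
    heavily penalize collisions, and mildly penalize passing on the left so
    that right-side passes score better. Weights are assumed values."""
    reward = w_progress * progress
    if collided:
        reward -= collision_penalty
    if passed_on_left:
        reward -= left_pass_penalty
    return reward

class GreedyPolicy:
    """Stand-in for a trained policy: heads toward the goal at walking speed.
    A learned policy would instead map observed pedestrians to a velocity."""
    def choose_velocity(self, position, goal, v_max=1.2):
        direction = goal - position
        dist = np.linalg.norm(direction)
        return np.zeros(2) if dist < 1e-6 else v_max * direction / dist

def run_control_loop(policy, start, goal, dt=0.1, tol=0.3, max_steps=200):
    """Receding-horizon loop: every dt seconds, choose a velocity, execute it
    for one control step, then repeat, so the robot never stops to replan an
    entire path to the goal."""
    position = np.array(start, dtype=float)
    for _ in range(max_steps):
        if np.linalg.norm(position - goal) < tol:
            break
        velocity = policy.choose_velocity(position, goal)
        position += velocity * dt          # act for one tenth of a second
    return position

if __name__ == "__main__":
    print(social_reward(progress=0.12, collided=False, passed_on_left=True))
    print(run_control_loop(GreedyPolicy(), start=[0.0, 0.0], goal=np.array([5.0, 0.0])))
```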
Crowd control
The team put the robot through drive tests in the busy hallways of MIT's Stata Center. "We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that," Everett says. "One time there was even a tour group, and it perfectly avoided them."

Going forward, Everett says, he plans to explore how robots might handle crowds in a pedestrian environment.

"Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together," Everett says. "There may be a social rule of, 'Don't move through people, don't split people up, treat them as one mass.' That's something we're looking at in the future."
Research Report: Socially aware motion planning with deep reinforcement learning