Can robots learn from machine dreams?
MIT CSAIL researchers (left to right) Alan Yu, an undergraduate in electrical engineering and computer science (EECS); Phillip Isola, associate professor of EECS; and Ge Yang, a postdoctoral associate, developed an AI-powered simulator that generates unlimited, diverse, and realistic training data for robots. Robots trained in this virtual environment can seamlessly transfer their skills to the real world, performing at expert levels without additional fine-tuning. Credit: Photo: Michael Grimmett/MIT CSAIL
by Rachel Gordon | MIT CSAIL
Boston MA (SPX) Nov 20, 2024

For roboticists, one challenge towers above all others: generalization - the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans' ability to provide it.

Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called "LucidSim," uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world. "A fundamental challenge in robot learning has long been the 'sim-to-real gap' - the disparity between simulated training environments and the complex, unpredictable real world," says MIT CSAIL postdoc Ge Yang, a lead researcher on LucidSim. "Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities."

The multipronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.
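The three-stage pipeline described above can be sketched in miniature. In this illustrative Python snippet (all function and variable names are mine, not the LucidSim codebase's), the language-model stage is mocked with combinatorial templates; in the real system, the descriptions condition a generative image model, and the physics simulator supplies the geometry that keeps the generated images grounded.

```python
import itertools
import random

# Hypothetical sketch of LucidSim's first stage: a language model expands a
# terse task spec into many varied environment descriptions. The "LLM" here
# is mocked by combining scene attributes; the real system's descriptions
# then drive a generative image model guided by a physics simulator.

SURFACES = ["mossy stone steps", "rain-slicked concrete stairs", "stacked wooden pallets"]
LIGHTING = ["harsh noon sun", "overcast dusk", "sodium streetlights"]
SETTINGS = ["an alley in Cambridge", "a loading dock", "a park trail"]

def describe_environments(n: int, seed: int = 0) -> list[str]:
    """Return n diverse structured scene descriptions (mock LLM output)."""
    rng = random.Random(seed)
    combos = list(itertools.product(SURFACES, LIGHTING, SETTINGS))
    rng.shuffle(combos)
    return [f"{s} under {l}, in {p}" for s, l, p in combos[:n]]

for prompt in describe_environments(4):
    print(prompt)  # each prompt would seed one training environment
```

The point of the stage is coverage: many distinct, structured descriptions from one task, each becoming a different-looking training scene.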

The birth of an idea: From burritos to breakthroughs
The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, Massachusetts. "We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn't have a pure vision-based policy to begin with," says Alan Yu, an undergraduate student in electrical engineering and computer science (EECS) at MIT and co-lead author on LucidSim. "We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That's where we had our moment."

To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with such tight control over the image's composition, the model produced near-identical images from the same prompt. So they devised a way to source diverse text prompts from ChatGPT.
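The conditioning idea can be illustrated with a toy (names and data are mine, and a hash stands in for the generative image model): the simulator-derived depth map and semantic mask are reused for every prompt, so the geometry the robot must learn stays fixed while the text prompt varies only the appearance.

```python
import hashlib

# Toy sketch of depth/mask-conditioned generation: geometry (DEPTH, MASK)
# comes from the simulator and is identical across prompts; the prompt only
# changes how each semantic class is "rendered". A hash fakes the image model.

DEPTH = [[1.0, 1.0, 3.0],
         [1.0, 3.0, 3.0],
         [3.0, 3.0, 3.0]]              # meters, from the simulated scene
MASK = [["step", "step", "wall"],
        ["step", "wall", "wall"],
        ["wall", "wall", "wall"]]      # semantic labels per pixel

def fake_generate(prompt: str) -> list[list[str]]:
    """Stand-in for a depth/mask-conditioned generative image model."""
    def shade(label: str) -> str:
        digest = hashlib.sha256(f"{prompt}|{label}".encode()).hexdigest()
        return digest[:6]              # pseudo-color for this class
    return [[shade(label) for label in row] for row in MASK]

img_a = fake_generate("mossy stone steps at dusk")
img_b = fake_generate("sunlit concrete stairs")
# DEPTH and MASK are shared by both images; only appearance differs.
```

Holding geometry constant while varying prompts is what the team found insufficient on its own: without diverse prompts, the outputs collapse toward near-duplicates.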

This approach, however, yielded only a single image. To make short, coherent videos that serve as little "experiences" for the robot, the scientists developed a second technique, called "Dreams In Motion." The system computes the movement of each pixel between frames, warping a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot's perspective.
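A one-dimensional toy conveys the core geometric idea behind this warping (this is my simplification, not the paper's implementation): given one generated image, per-pixel depth, and a small sideways camera motion, each pixel is reprojected by a disparity proportional to focal length times translation divided by depth, so near pixels move farther than distant ones, and near content occludes far content.

```python
# Sketch of image warping under camera motion, in the spirit of "Dreams In
# Motion": forward-warp a 1-D image row by disparity = focal * translation /
# depth. Pixels are painted far-to-near so nearer content correctly occludes
# farther content (a simple painter's-algorithm z-buffer).

def warp_frame(image, depth, focal, translation):
    """Synthesize the next frame from one image plus per-pixel depth."""
    w = len(image)
    out = [None] * w                       # None = hole left by disocclusion
    for x in sorted(range(w), key=lambda i: -depth[i]):  # far pixels first
        shift = round(focal * translation / depth[x])
        nx = x + shift
        if 0 <= nx < w:
            out[nx] = image[x]             # near pixels overwrite far ones
    return out

frame0 = ["F0", "F1", "F2", "N0", "N1", "N2"]  # F = far wall, N = near step
depths = [6.0, 6.0, 6.0, 2.0, 2.0, 2.0]
frame1 = warp_frame(frame0, depths, focal=6, translation=-1)
# Near pixels shift three columns, far pixels only one, so the near step
# slides across and occludes the far wall.
```

The real system applies this per-pixel reprojection in 2D, using the simulator's scene geometry and the robot's own motion, which is far cheaper than generating every video frame from scratch.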

"We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days," says Yu. "While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It's exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments."
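For contrast, the baseline Yu mentions is simple to sketch (the function and attribute names below are mine): domain randomization redraws object colors and textures at random every episode, which forces the policy to ignore appearance but produces scenes that look nothing like reality.

```python
import random

# Minimal sketch of domain randomization, the 2017-era baseline LucidSim is
# compared against: each training episode assigns random colors and textures
# to every object, trading realism for visual diversity.

def randomize_scene(objects, rng):
    """Return a fresh random appearance for each named object."""
    return {
        name: {
            "rgb": tuple(rng.randrange(256) for _ in range(3)),
            "texture": rng.choice(["checker", "noise", "stripes", "flat"]),
        }
        for name in objects
    }

rng = random.Random(42)
episode_1 = randomize_scene(["floor", "stairs", "box"], rng)
episode_2 = randomize_scene(["floor", "stairs", "box"], rng)
# Same objects, freshly randomized looks each episode.
```

LucidSim's claim is that generative images deliver this episode-to-episode diversity while also staying visually close to real scenes.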

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main test bed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area, a setting where color perception is critical. "Today, these robots still learn from real-world demonstrations," says Yang. "Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment."

Who's the real expert?
The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: Robots trained by the expert struggled, succeeding only 15 percent of the time - and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent. "And giving our robot more data monotonically improves its performance - eventually, the student becomes the expert," says Yang.

"One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments," says Stanford University assistant professor of electrical engineering Shuran Song, who wasn't involved in the research. "The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks."

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines - ones that learn to navigate our complex world without ever setting foot in it.

Research Report: Learning Visual Parkour from Generated Images

Related Links
Computer Science and Artificial Intelligence Laboratory (CSAIL)

