Robot Technology News  
ROBO SPACE
Researchers unveil tool to debug 'black box' deep learning algorithms
by Staff Writers
New York NY (SPX) Oct 26, 2017


A debugging tool developed by researchers at Columbia and Lehigh generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug.

Computers can now beat humans at chess and Go, but it may be a while before people trust their driving. The dangers of self-driving cars were highlighted last year when a Tesla operating autonomously collided with a truck its vision system failed to distinguish from the bright sky, killing the driver.

Self-driving cars depend on a form of machine learning called deep learning. Modeled after the human brain, layers of artificial neurons process and consolidate information, developing a set of rules to solve complex problems, from recognizing friends' faces online to translating email written in Chinese.

The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing. Deep learning systems do not explain how they reach their decisions, and that makes them hard to trust.

In a new approach to the problem, researchers at Columbia and Lehigh universities have come up with a way to automatically error-check the thousands to millions of neurons in a deep learning neural network.

Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning by clusters of neurons. Researchers present it on Oct. 29 at ACM's Symposium on Operating Systems Principles in Shanghai.

"You can think of our testing process as reverse engineering the learning process to understand its logic," said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. "This gives you some visibility into what the system is doing and where it's going wrong."

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way of measuring how thoroughly logic within the network has been checked for errors.

Manually generated test images can be fed into the network at random until one triggers a wrong decision - telling the car to veer into the guardrail, for example, instead of away from it. A faster technique, called adversarial testing, automatically generates test images that it alters incrementally until one tricks the system.
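Adversarial testing of this kind is often implemented as a gradient-based perturbation, such as the fast gradient sign method. A minimal sketch on a toy linear classifier, using NumPy - the model, weights, and step size here are illustrative, not taken from the paper:

```python
import numpy as np

# Toy "network": a linear classifier with two output scores.
W = np.array([[1.0, -0.5], [-1.0, 0.5]])  # weights, one row per class
b = np.array([0.1, -0.1])                 # biases

def scores(x):
    return W @ x + b

def predict(x):
    return int(np.argmax(scores(x)))

# An input correctly classified as class 0.
x = np.array([1.0, 0.0])
assert predict(x) == 0

# Incremental alteration: nudge the input along the sign of the gradient
# of the margin (score_0 - score_1) to shrink it, until the label flips.
eps = 0.05
adv = x.copy()
for _ in range(100):
    if predict(adv) != 0:
        break
    grad_margin = W[0] - W[1]      # gradient of (score_0 - score_1) w.r.t. x
    adv = adv - eps * np.sign(grad_margin)

print(predict(adv))  # 1 - the incrementally altered input now fools the model
```

Each iteration changes the input only slightly, so the final adversarial input can remain visually close to the original while flipping the decision.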

DeepXplore is able to find a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause neuron clusters to make conflicting decisions. To simulate real-world conditions, photos are lightened and darkened, and made to mimic the effect of dust on a camera lens, or a person or object blocking the camera's view.
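The lighting and occlusion transformations described above can be sketched with simple array operations on an image. The image size, darkening factor, and patch coordinates below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def darken(img, factor=0.5):
    """Scale pixel intensities down to simulate low light."""
    return np.clip(img * factor, 0.0, 1.0)

def occlude(img, top, left, size=8):
    """Zero out a square patch, mimicking dust on the lens or a
    person/object blocking part of the camera's view."""
    out = img.copy()
    out[top:top + size, left:left + size] = 0.0
    return out

rng = np.random.default_rng(0)
photo = rng.random((32, 32))          # stand-in for a road image
dark = darken(photo, factor=0.3)
blocked = occlude(photo, top=4, left=4, size=8)

print(dark.max() <= 0.3 + 1e-9)       # True: every pixel dimmed
print(blocked[4:12, 4:12].sum())      # 0.0: the patch is fully occluded
```

Because these transformations preserve the scene's content, any change in the network's decision points to flawed reasoning rather than an unrecognizable input.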

A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets to tell it to go right. Inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fix the bug.

Using optimization techniques, the researchers designed DeepXplore to trigger as many conflicting decisions as possible with its test images while also maximizing the number of neurons activated.
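In outline, that objective combines two terms: make a set of models disagree on the same input, and activate as many neurons as possible. A toy NumPy sketch of such a gradient-ascent loop on two small stand-in models - the weights, threshold, and finite-difference gradients here are illustrative simplifications, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two tiny stand-in "networks": one ReLU layer of four neurons each.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(4, 3))

def neurons(W, x):
    return np.maximum(W @ x, 0.0)          # ReLU activations

def objective(x, lam=0.5):
    """Reward inputs that (a) make the two models' activations diverge
    and (b) fire many neurons above a small threshold."""
    a1, a2 = neurons(W1, x), neurons(W2, x)
    disagreement = np.abs(a1 - a2).sum()
    coverage = (np.concatenate([a1, a2]) > 0.1).mean()
    return disagreement + lam * coverage

# Gradient ascent by finite differences (a real tool would use exact
# gradients), keeping the best input found so far.
x0 = rng.normal(size=3)
x, best = x0.copy(), x0.copy()
step, h = 0.05, 1e-4
for _ in range(200):
    g = np.array([(objective(x + h * e) - objective(x - h * e)) / (2 * h)
                  for e in np.eye(3)])
    x = x + step * g
    if objective(x) > objective(best):
        best = x.copy()

print(objective(x0), objective(best))  # the kept best never scores below the start
```

Inputs that score highly on this objective are exactly the ones likely to expose rare, conflicting behavior among neuron clusters.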

Testing their software on 15 state-of-the-art neural networks, including Nvidia's DAVE-2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons - 30 percent more on average than either random or adversarial testing - and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.
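The "percent of neurons activated" figure corresponds to a neuron-coverage metric: the fraction of neurons whose activation exceeds some threshold on at least one test input. A minimal sketch, where the one-layer model and the threshold value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 5))              # one ReLU layer of 10 neurons

def activations(x):
    return np.maximum(W @ x, 0.0)

def neuron_coverage(test_inputs, threshold=0.2):
    """Fraction of neurons fired above `threshold` by any test input."""
    covered = np.zeros(W.shape[0], dtype=bool)
    for x in test_inputs:
        covered |= activations(x) > threshold
    return covered.mean()

inputs = [rng.normal(size=5) for _ in range(20)]
cov = neuron_coverage(inputs)
print(f"coverage: {cov:.0%}")
```

Coverage can only grow as more test inputs are added, which is why a test suite that leaves many neurons dormant has, by this measure, exercised less of the network's logic.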

Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it can't certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned.

A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks. Costly in computing time, but offering strong guarantees, this small-scale verification technique complements DeepXplore's full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.
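The flavor of guarantee such verification provides can be illustrated, far more crudely than ReluPlex's SMT-based algorithm, with interval bound propagation: push an input region through the network and check that a property holds for every input in it. The weights and the property below are illustrative assumptions:

```python
import numpy as np

# A single ReLU layer standing in for a small verified network.
W = np.array([[1.0, 2.0], [-1.0, 1.0]])
b = np.array([0.5, -0.5])

def relu_layer_bounds(lo, hi):
    """Propagate an input box [lo, hi] through ReLU(Wx + b)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = Wp @ lo + Wn @ hi + b     # lower bound of Wx + b over the box
    out_hi = Wp @ hi + Wn @ lo + b     # upper bound of Wx + b over the box
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

# Property to verify: for ALL inputs in [-0.1, 0.1]^2, neuron 0 stays below 1.0.
lo, hi = np.full(2, -0.1), np.full(2, 0.1)
out_lo, out_hi = relu_layer_bounds(lo, hi)
print(out_hi[0] < 1.0)  # True: the property is proved for the whole box
```

Unlike testing, which checks individual inputs, the bound covers every point in the region at once - which is also why such methods are costly and currently limited to small networks.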

"Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good," he said. "However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can't find any bugs, that there are, in fact, no bugs."

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in antivirus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software.

"We plan to keep improving DeepXplore to open the black box and make machine learning systems more reliable and transparent," said co-developer Kexin Pei, a graduate student at Columbia. "As more decision-making is turned over to machines, we need to make sure we can test their logic so that outcomes are accurate and fair."

The team has made their open-source software public for other researchers to use, and launched a website to let people upload their own data to see how the testing process works.

"We want to make it easy for researchers to be able to validate their machine learning systems," said co-developer Junfeng Yang, a computer scientist at Columbia Engineering and a member of the Data Science Institute.

"Creating the next generation of programming and validation tools for this new computing paradigm will require a collaborative effort that will ultimately benefit society."

Adds co-developer Yinzhi Cao, a computer scientist at Lehigh: "Our ultimate goal is to be able to test a system, like self-driving cars, and tell the creators whether it is truly safe, and under what conditions."

Research Report: DeepXplore: Automated Whitebox Testing of Deep Learning Systems


Related Links
Columbia University School of Engineering and Applied Science


