Researchers unveil tool to debug 'black box' deep learning algorithms
by Staff Writers
New York NY (SPX) Oct 26, 2017
Computers can now beat humans at chess and Go, but it may be a while before people trust them at the wheel. The danger of self-driving cars was highlighted last year when Tesla's autonomous car collided with a truck it mistook for a cloud, killing its passenger.

Self-driving cars depend on a form of machine learning called deep learning. Modeled after the human brain, layers of artificial neurons process and consolidate information, developing a set of rules to solve complex problems, from recognizing friends' faces online to translating email written in Chinese. The technology has achieved impressive feats of intelligence, but as more tasks become automated this way, concerns about safety, security, and ethics are growing. Deep learning systems do not explain how they make their decisions, and that makes them hard to trust.

In a new approach to the problem, researchers at Columbia and Lehigh universities have come up with a way to automatically error-check the thousands to millions of neurons in a deep learning neural network. Their tool, DeepXplore, feeds confusing, real-world inputs into the network to expose rare instances of flawed reasoning by clusters of neurons. The researchers present it on Oct. 29 at ACM's Symposium on Operating Systems Principles in Shanghai.

"You can think of our testing process as reverse engineering the learning process to understand its logic," said co-developer Suman Jana, a computer scientist at Columbia Engineering and a member of the Data Science Institute. "This gives you some visibility into what the system is doing and where it's going wrong."

Debugging the neural networks in self-driving cars is an especially slow and tedious process, with no way of measuring how thoroughly the logic within the network has been checked for errors. Manually generated test images can be randomly fed into the network until one triggers a wrong decision, telling the car, for example, to veer into the guardrail instead of away from it. A faster technique, called adversarial testing, automatically generates test images that it alters incrementally until one tricks the system.

DeepXplore finds a wider variety of bugs than random or adversarial testing by using the network itself to generate test images likely to cause clusters of neurons to make conflicting decisions. To simulate real-world conditions, photos are lightened and darkened, or made to mimic the effect of dust on a camera lens, or of a person or object blocking the camera's view. A photo of the road may be darkened just enough, for example, to cause one set of neurons to tell the car to turn left, and two other sets to tell it to go right. Inferring that the first set misclassified the photo, DeepXplore automatically retrains the network to recognize the darker image and fixes the bug.

Using optimization techniques, the researchers designed DeepXplore to generate test images that trigger as many conflicting decisions as possible while activating the maximum number of neurons. Testing their software on 15 state-of-the-art neural networks, including Nvidia's DAVE-2 network for self-driving cars, the researchers uncovered thousands of bugs missed by previous techniques. They report activating up to 100 percent of network neurons - 30 percent more on average than either random or adversarial testing - and bringing overall accuracy up to 99 percent in some networks, a 3 percent improvement on average.
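For readers who want a concrete picture, the following is a minimal Python sketch of the idea, not the authors' implementation: the tiny randomly weighted networks, the 0.5 activation threshold, and the sweep over darkening levels are all assumptions invented for the example. DeepXplore itself applies gradient-based joint optimization to real networks; the simple grid sweep here merely stands in for that step.

    import numpy as np

    class TinyMLP:
        """Toy two-layer ReLU network standing in for a trained driving model."""
        def __init__(self, seed, in_dim=64, hidden=32, out_dim=3):
            r = np.random.default_rng(seed)
            self.W1 = r.normal(size=(in_dim, hidden))
            self.W2 = r.normal(size=(hidden, out_dim))

        def forward(self, x):
            h = np.maximum(0.0, x @ self.W1)  # hidden-layer activations (ReLU)
            return h, h @ self.W2             # activations and output logits

    def neuron_coverage(acts, threshold=0.5):
        """Fraction of hidden neurons firing above a threshold."""
        return float(np.mean(acts > threshold))

    def test_score(image, models):
        """Score an input by conflicting decisions plus average coverage."""
        decisions, cov = [], 0.0
        for m in models:
            acts, logits = m.forward(image)
            decisions.append(int(np.argmax(logits)))  # 0=left, 1=straight, 2=right
            cov += neuron_coverage(acts) / len(models)
        conflicts = len(set(decisions)) - 1           # 0 when all models agree
        return conflicts + cov, decisions

    # Three independently trained stand-ins for the same steering task.
    models = [TinyMLP(seed) for seed in (1, 2, 3)]
    photo = np.random.default_rng(0).random(64)       # flattened "road photo"

    # Crude stand-in for DeepXplore's joint optimization: sweep darkening
    # levels and keep the image that maximizes conflicts plus coverage.
    best_score, best_factor, best_decisions = -1.0, None, None
    for factor in np.linspace(0.2, 1.0, 9):
        s, decisions = test_score(photo * factor, models)
        if s > best_score:
            best_score, best_factor, best_decisions = s, factor, decisions

    print(f"darkening={best_factor:.2f} decisions={best_decisions} score={best_score:.2f}")

Because the stand-in models here have random weights, they disagree almost immediately; with real networks trained on the same data, such conflicts are rare, and each one points at a genuine corner case worth retraining on.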
Still, a high level of assurance is needed before regulators and the public are ready to embrace robot cars and other safety-critical technology like autonomous air-traffic control systems. One limitation of DeepXplore is that it cannot certify that a neural network is bug-free. That requires isolating and testing the exact rules the network has learned. A new tool developed at Stanford University, called ReluPlex, uses the power of mathematical proofs to do this for small networks (a simplified illustration of this proof-based approach appears below). Costly in computing time but offering strong guarantees, this small-scale verification technique complements DeepXplore's full-scale testing approach, said ReluPlex co-developer Clark Barrett, a computer scientist at Stanford.

"Testing techniques use efficient and clever heuristics to find problems in a system, and it seems that the techniques in this paper are particularly good," he said. "However, a testing technique can never guarantee that all the bugs have been found, or similarly, if it can't find any bugs, that there are, in fact, no bugs."

DeepXplore has applications beyond self-driving cars. It can find malware disguised as benign code in antivirus software, and uncover discriminatory assumptions baked into predictive policing and criminal sentencing software.

"We plan to keep improving DeepXplore to open the black box and make machine learning systems more reliable and transparent," said co-developer Kexin Pei, a graduate student at Columbia. "As more decision-making is turned over to machines, we need to make sure we can test their logic so that outcomes are accurate and fair."

The team has released their software as open source for other researchers to use, and launched a website where people can upload their own data to see how the testing process works.

"We want to make it easy for researchers to be able to validate their machine learning systems," said co-developer Junfeng Yang, a computer scientist at Columbia Engineering and a member of the Data Science Institute. "Creating the next generation of programming and validation tools for this new computing paradigm will require a collaborative effort that will ultimately benefit society."

Adds co-developer Yinzhi Cao, a computer scientist at Lehigh: "Our ultimate goal is to be able to test a system, like self-driving cars, and tell the creators whether it is truly safe, and under what conditions."
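To illustrate what proof-based verification looks like in practice, here is a rough Python sketch using the general-purpose Z3 solver rather than ReluPlex itself (ReluPlex is a specialized solver built for this problem); the single-neuron "network" and the safety property are invented for the example. The network and the negation of the property are encoded as constraints, and the solver searches for a counterexample.

    # pip install z3-solver
    from z3 import Real, Solver, If, sat

    # A one-neuron "network": y = relu(2*x - 1).
    x, y = Real("x"), Real("y")
    pre = 2 * x - 1
    relu = If(pre >= 0, pre, 0)

    s = Solver()
    s.add(y == relu)
    # Property to verify: for all x in [0, 1], y <= 1.
    # Check the negation: is there any x in [0, 1] with y > 1?
    s.add(x >= 0, x <= 1, y > 1)

    if s.check() == sat:
        print("counterexample found:", s.model())  # property violated
    else:
        print("verified: no input in [0, 1] yields y > 1")

If the solver reports the negated property unsatisfiable, the property is proven for every input in the range, which is the kind of guarantee that testing alone cannot provide, though scaling such proofs beyond small networks remains expensive.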
Research Report: DeepXplore: Automated Whitebox Testing of Deep Learning Systems