Programming tweak helps AI software imitate human visual learning
by Brooks Hays
Washington DC (UPI) Jan 12, 2021
Using a novel programming tweak, a pair of neuroscientists have managed to replicate human visual learning in computer-based artificial intelligence. The tweak, described Tuesday in the journal Frontiers in Computational Neuroscience, yielded a model capable of learning new objects faster than earlier AI programs.

"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," lead study author Maximilian Riesenhuber said in a news release.

"We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing," said Riesenhuber, a professor of neuroscience at Georgetown University Medical Center.

By three or four months of age, human babies are building categories to make sense of the world and its many visual inputs. For example, with only limited exposure, babies can learn to recognize zebras and differentiate them from other animals. Computers, on the other hand, must process a large number of visual examples of an object before they're able to recognize it.

Traditional AI learning models rely on low-level information, like shape and color. To improve the AI learning process, Riesenhuber and research partner Joshua Rule, a postdoctoral scholar at the University of California, Berkeley, programmed an AI model to ignore such low-level data and instead focus on relationships between entire visual categories.

"The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects," Riesenhuber said.

The researchers programmed their artificial neural network to use a more sophisticated approach to visual processing and learning, relying on its previously acquired visual knowledge. Their programming tweak helped the AI network learn to recognize new objects much faster.

"Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts," Rule said. "It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter."

Based on brain imaging and object recognition experiments with human subjects, neuroscientists have previously theorized that the anterior temporal lobe of the brain powers the ability to recognize abstract visual concepts. This allows humans to learn new objects by analyzing relationships between entire visual categories. Instead of starting from scratch each time they are tasked with learning new objects, these complex neural hierarchies allow humans to leverage prior learning.

"By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe," Riesenhuber said.

Computers have been programmed to beat humans at chess and other sophisticated logic games, but the human brain's ability to quickly process visual information remains unmatched.

"Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood," Riesenhuber said.
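The study's actual model is not reproduced in this article; the short Python sketch below is only a hypothetical illustration of the general idea described above, namely learning a new visual category from a few examples by describing it through its similarity to already-learned high-level concepts rather than through low-level features. All names, vector sizes, and data are invented for illustration and do not come from the paper.

# Illustrative sketch only: few-shot learning of a new category by describing
# it in terms of similarities to previously learned categories (high-level
# concepts), rather than by retraining on low-level features. All embeddings
# here are random stand-ins, not the authors' published model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "prior knowledge": one prototype embedding per already-learned concept
# (e.g. horse, duck, beaver, sea otter).
known_concepts = {name: rng.normal(size=64)
                  for name in ["horse", "duck", "beaver", "sea_otter"]}
concept_matrix = np.stack(list(known_concepts.values()))   # shape (4, 64)

def to_concept_space(feature_vec):
    """Describe an input by its cosine similarity to each known concept."""
    sims = concept_matrix @ feature_vec
    sims = sims / (np.linalg.norm(concept_matrix, axis=1)
                   * np.linalg.norm(feature_vec) + 1e-9)
    return sims                                             # shape (4,)

# Few-shot learning: a handful of examples of a new category ("zebra"),
# simulated here as noisy variants of the horse embedding.
zebra_examples = [known_concepts["horse"] + 0.3 * rng.normal(size=64)
                  for _ in range(3)]

# The new concept is stored as the average of its descriptions in concept
# space ("a zebra is mostly horse-like") -- no low-level retraining needed.
zebra_prototype = np.mean([to_concept_space(x) for x in zebra_examples], axis=0)

# Recognition: a query is matched to the new class if its concept-space
# description lies close to the stored prototype.
query = known_concepts["horse"] + 0.3 * rng.normal(size=64)
distance = np.linalg.norm(to_concept_space(query) - zebra_prototype)
print(f"distance to 'zebra' prototype in concept space: {distance:.3f}")

The design choice mirrors the platypus analogy in the article: a new object is encoded by how much it resembles things the system already knows, so only a few examples are needed to place it in that concept space.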
Using light to revolutionize artificial intelligence
Quebec City, Canada (SPX) Jan 12, 2021
An international team of researchers, including Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), has introduced a new photonic processor that could revolutionize artificial intelligence, as reported in the journal Nature. Artificial neural networks, layers of interconnected artificial neurons, are of great interest for machine learning tasks such as speech recognition and medical diagnosis. Electronic computing hardware, however, is nearing ...