Even smartest AI models don't match human visual processing: Study

Web Admin

5 Dariya News

Toronto, 18 Sep 2022

Deep convolutional neural networks (DCNNs) do not see objects the way humans do, through configural shape perception, and that could be dangerous in real-world artificial intelligence (AI) applications, say researchers.

DCNNs are the type most commonly used to identify patterns in images and video.

"Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain," said researcher James Elder from York University in Toronto.
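
As a rough illustration of the kind of recognition pipeline the study examines, here is a minimal sketch that classifies a single image with a standard pretrained DCNN. The choice of model (ResNet-50), the preprocessing constants, and the input filename are assumptions for illustration, not details taken from the study.

# Minimal sketch: classifying one image with a pretrained DCNN.
# The model choice, preprocessing values, and "example.jpg" are
# illustrative assumptions, not details from the paper.
import torch
from torchvision import models, transforms
from PIL import Image
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()  # inference mode; no training
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
batch = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():  # no gradients needed for inference
    logits = model(batch)
print(int(logits.argmax(dim=1)))  # index of the predicted object category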

"These deep models tend to take 'shortcuts' when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners," Elder added.

For the study, published in the journal iScience, the team employed novel visual stimuli called "Frankensteins" to explore how the human brain and DCNNs process holistic, configural object properties.

"Frankensteins are simply objects that have been taken apart and put back together the wrong way around. As a result, they have all the right local features, but in the wrong places," Elder said.

The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not, revealing an insensitivity to configural object properties.

According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks were able to predict trial-by-trial human object judgements accurately.
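
The article does not detail how trial-by-trial prediction was scored, but a minimal sketch, assuming each trial produces a binary judgement from both the human observer and the network, might compute the fraction of trials on which the two decisions match:

# Sketch of trial-by-trial agreement, assuming binary decisions per trial.
# The data below are hypothetical; the study's actual scoring may differ.
def trial_agreement(human, network):
    assert len(human) == len(network), "one decision per trial from each"
    return sum(h == n for h, n in zip(human, network)) / len(human)
human_choices = [1, 0, 1, 1, 0, 1, 0, 0]    # hypothetical human judgements
network_choices = [1, 1, 1, 0, 0, 1, 0, 1]  # hypothetical network outputs
print(trial_agreement(human_choices, network_choices))  # 0.625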

"We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition," Elder noted.

 

Tags: Technology , Research , Study , Artificial Intelligence