Humans and artificial intelligence ‘see’ objects in same way, researchers discover
Breakthrough could help AI researchers attempting to replicate human vision
Researchers have discovered a “spooky” similarity between how human brains and artificial intelligence systems see three-dimensional objects.
The discovery is a significant step towards better understanding how to replicate human vision with AI, said scientists at Johns Hopkins University who made the breakthrough.
Natural and artificial neurons registered nearly identical responses when processing fragments of 3D shapes, despite the artificial network having been trained only on two-dimensional photographs.
Units in the AlexNet network unexpectedly responded to the images in the same way as neurons in an area of the human brain called V4, the first stage in the brain’s object vision pathway.
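The article does not describe the researchers’ analysis pipeline, but as a rough sketch of the kind of comparison involved, the snippet below uses PyTorch’s torchvision to load a pretrained AlexNet and record how units in one intermediate convolutional layer respond to a stimulus image. The choice of layer (the third convolutional layer) and the file name shape_fragment.png are illustrative assumptions, not details taken from the study.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load AlexNet pretrained on 2D photographs (ImageNet labels).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# Capture activations from an intermediate convolutional layer.
# features[6] is AlexNet's third conv layer; the paper's actual layer
# choice is not given in this article, so this is illustrative only.
activations = {}
def save_activation(module, inputs, output):
    activations["conv3"] = output.detach()

model.features[6].register_forward_hook(save_activation)

# Standard ImageNet preprocessing; "shape_fragment.png" is a hypothetical
# stimulus image standing in for the 3D shape fragments used in the study.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("shape_fragment.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    model(image)

# Each unit's response to the stimulus, analogous to recording a V4
# neuron's firing rate to the same image.
unit_responses = activations["conv3"].flatten()
print(unit_responses.shape)
```

Repeating this over many shape-fragment images would give each artificial unit a response profile that could then be compared with recordings from V4 neurons, which is the spirit of the correspondence the researchers describe.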
“I was surprised to see strong, clear signals for 3D shape as early as V4,” said Ed Connor, a neuroscience professor at the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins University.
“But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels.”
Professor Connor described a “spooky correspondence” between image response patterns in natural and artificial neurons, especially given that one is a product of thousands of years of evolution and lifetime learning, and the other is designed by computer scientists.
“Artificial networks are the most promising current models for understanding the brain,” Professor Connor said.
“Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence.”
A research paper detailing the discovery was published in the scientific journal Current Biology on Thursday.