Computers 'to match human brains by 2030'
Computer power will match the intelligence of human beings within the next 20 years because of the accelerating speed at which technology is advancing, according to a leading scientific "futurologist".
There will be 32 times more technical progress during the next half century than there was in the past half century, and one of the outcomes is that artificial intelligence could be on a par with human intellect by the 2020s, said the American computer guru Ray Kurzweil.
Machines will rapidly overtake humans in their intellectual abilities and will soon be able to solve some of the most intractable problems of the 21st century, said Dr Kurzweil, one of 18 maverick thinkers chosen to identify the greatest technological challenges facing humanity.
Dr Kurzweil is considered one of the most radical figures in the field of technological prediction. His credentials stem from being a pioneer in various fields of computing, such as optical character recognition – he developed the first commercial omni-font OCR system – and automatic speech recognition by machine.
His address yesterday to the American Association for the Advancement of Science (AAAS) portrayed a future in which machine intelligence will far surpass that of the human brain as machines learn to communicate, teach and replicate among themselves.
Central to his thesis is the idea that silicon-based technology follows the "law of accelerating returns". The computer chip, for instance, has doubled in power every two years for the past half century, which has led to an ever-accelerating progression – and miniaturisation – in all chip-based technologies.
Dr Kurzweil told the annual meeting of the AAAS in Boston: "The paradigm shift rate is now doubling every decade, so the next half century will see 32 times more technical progress than the last half century. Computation, communication, biological technologies – for example, DNA sequencing – brain scanning, knowledge of the human brain, and human knowledge in general are all accelerating at an ever-faster pace, generally doubling price-performance, capacity and bandwidth every year."
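For readers who want to check the "32 times" arithmetic, here is a minimal sketch – an illustration, not a calculation from the article – assuming, as the quote does, that the amount of progress made in each decade is double that of the decade before:

    # Relative progress per decade under the "law of accelerating returns":
    # decade k (where k = 0 is the decade now beginning) contributes 2**k units.
    last_half_century = sum(2.0 ** k for k in range(-5, 0))  # five decades just past
    next_half_century = sum(2.0 ** k for k in range(0, 5))   # five decades to come
    print(next_half_century / last_half_century)             # prints 32.0

The two sums are 31 and 31/32 respectively, so the ratio comes out at exactly 32 – the figure Dr Kurzweil cites.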
Computers have so far been based on two-dimensional chips made from silicon, but development is already well advanced on three-dimensional chips with vastly improved performance, and even on constructing them out of biological molecules, which can be miniaturised further than metal-based computer chips.
"Three-dimensional, molecular computing will provide the hardware for human-level 'strong artificial intelligence' by the 2020s. The more important software insights will be gained in part from the reverse engineering of the human brain, a process well under way. Already, two dozen regions of the human brain have been modelled and simulated," he said.
Although the brain cannot match computers in the straight storage and retrieval of information, it has an unrivalled capacity for associating different strands of information, for looking ahead and planning, and for the imaginative creativity that is at the heart of human existence. But Dr Kurzweil is one of several computer scientists who believe that computers are well on the way to creating a "post-human" world where a second, intelligent entity exists alongside people.
"Once non-biological intelligence matches the range and subtlety of human intelligence, it will necessarily soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge," Dr Kurzweil said.
"We are understanding disease and ageing processes as information processes, and are gaining the tools to reprogramme them. RNA interference, for example, allows us to turn selected genes off, and new forms of gene therapy are enabling us to effectively add new genes. Within two decades, we will be in a position to stop and reverse the progression of disease and ageing resulting in dramatic gains in health and longevity," he added.
Rise of the machines
The history of "artificial" intelligence goes back to classical times, although of course it was never called by that name. The Greek myths of Hephaestus and Pygmalion incorporate the idea of intelligent machines that take on human form. We would call them robots.
Mary Shelley took up the theme of man trying to create a living image of himself in her story of Frankenstein's monster, but the word "robot" did not enter the English language until Karel Capek's 1920 play R.U.R., which stood for Rossum's Universal Robots. The idea of a machine being able to match the intelligence of humans was explored in the 1940s by the great English mathematician Alan Turing, who devised a test of artificial intelligence. In a seminal paper published in 1950, Turing proposed a practical criterion – the Turing test: a machine passes if a human judge, conversing with it and with a person by text alone, cannot reliably tell which is which.
The term "artificial intelligence" (AI) was first coined by the computer scientist John McCarthy in 1956, and the concept was explored in the 1950s and 1960s by the likes of Marvin Minksy, of the Massachusetts Institute of Technology.
The science fiction writer Arthur C Clarke drew on the concept of AI in his book 2001: A Space Odyssey, which featured an intelligent computer called HAL that was an intellectual match for man.
By the mid-1970s, the financial backers of the AI industry had become disillusioned by its failure to match the human brain. But then, on 11 May 1997, the IBM computer Deep Blue became the first machine to beat a reigning world chess champion. This was soon followed by other "intelligent" feats, such as the robot car that drove 131 miles along an unrehearsed desert trail.
AI, portrayed in films such as Blade Runner and The Terminator, was on a roll again.