Science: The mind machine
Igor Aleksander has created a real-life successor to Hal. Charles Arthur hears how
IMAGINE a banana. What colour is it? Yellow, of course. Now try to form a picture of one that doesn't exist, that can't exist: a blue banana with red spots. Imagine that.
How did you do? If you found it hard, perhaps you ought to know that Igor Aleksander has a machine which can do that easily. When he asks it (in words) to produce an image of "banana" that is "blue with red spots", the image swims on to the screen in seconds.
This, says Professor Aleksander, indicates that the computer has something which scientists and computer engineers have been struggling towards for more than 50 years: machine consciousness. Yes, the same thing that marked out Hal, the computer in 2001: A Space Odyssey, and the robots of Isaac Asimov's science fiction.
At the moment, this machine consciousness can only categorise and imagine things in a limited domain. It knows what two-dimensional images of cats, butterflies, and mice look like. It also knows what things that are red, yellow, blue, green, and indeed blue with red spots look like. Give it an image of something it has never seen before, and it will try to categorise it. Equally, ask it to picture something it has not seen, but has the "language" for - such as a blue cat - and it will.
That might not sound like a lot. But it is actually an essential breakthrough, because, as Professor Aleksander points out, the ability to recognise "redness" - or any other sort of -ness - is something that philosophers have long maintained is the province only of conscious beings. And now he has achieved it on a humble PC.
"Philosophers call it the 'qualia' - the essence, the quality - of a thing," he explains. "A red boat, a red cat, both have 'redness'. They say it can't simply be something in the neurons." Yet he can observe the part of the system which observes colour decide that something is red, or reddish, while other parts haven't decided what the object actually is.
That separation of processing is another key part of consciousness, he thinks. "It's an emergent property of neural centres which interact," he says. (An "emergent property" is behaviour which only becomes apparent when you have sufficiently many individual components acting at the same time. For instance, a hundred neurons gives you nothing; a hundred billion, a human being.)
Though Professor Aleksander has been researching this field of artificial intelligence for 30 years, this breakthrough by his team at Imperial College has only been made in the past six months. The key, he says, lies in creating a set of neural networks complex enough that they can mimic the action of part of the human brain.
Neural networks are computer analogues of the neurons in our brains: they receive inputs from a number of sources and, depending on what they have been "taught" to recognise, produce a certain output. For example, a neuron in your brain, or a neural network in a computer, whose function is to detect yellow in a scene will "fire" if its input includes the visual representation of a banana, or a sodium streetlight.
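To make that concrete, here is a minimal sketch of a single artificial neuron, assuming the textbook weighted-sum model; the weights and scenes below are invented for illustration, not taken from the team's system:

```python
# A minimal artificial neuron: it "fires" (returns 1) when the
# weighted sum of its inputs crosses a threshold.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A toy "yellow detector": inputs are (red, green, blue) intensities, 0..1.
# Yellow light is strong in red and green but weak in blue.
yellow_weights = [1.0, 1.0, -2.0]

banana      = [0.9, 0.8, 0.1]   # yellowish scene -> should fire
streetlight = [1.0, 0.7, 0.0]   # sodium lamp, also yellowish -> should fire
sky         = [0.2, 0.3, 0.9]   # blueish scene -> should stay silent

for scene in (banana, streetlight, sky):
    print(neuron(scene, yellow_weights, threshold=1.2))
# -> 1, 1, 0
```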
By building neural networks up and interlinking them to create more and more complex feedback, you eventually produce a system whose rules are literally unknown. No person has programmed them. All you know is how it reacts.
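For a flavour of how a network's "rules" can emerge from teaching rather than programming, here is a sketch using the classic perceptron learning rule - a generic textbook method, not the team's own algorithm. Nobody writes the final weights; they fall out of the examples:

```python
# Train a single-layer perceptron purely from labelled examples.
def train(examples, n_inputs, rate=0.1, epochs=50):
    weights = [0.0] * (n_inputs + 1)              # final entry is a bias
    for _ in range(epochs):
        for inputs, target in examples:
            xs = list(inputs) + [1.0]             # append the bias input
            output = 1 if sum(w * x for w, x in zip(weights, xs)) > 0 else 0
            for i, x in enumerate(xs):            # nudge towards the target
                weights[i] += rate * (target - output) * x
    return weights

# Teach an AND gate by example: no one programs the rule itself.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(examples, n_inputs=2))   # learned weights no one chose by hand
```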
Professor Aleksander's team has produced the software equivalent of 250,000 neurons with four million connections. The advantage of his machine-based version is speed - "the neurons in our brain only fire about 100 times a second". Using a 200MHz PC - with the processor "firing" 200 million times a second - leaves headroom for the programs necessary to create artificial neurons. "The speed advantage lets us model things that go on in the brain even though the number of cells is smaller," he says.
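A back-of-envelope reading of those figures (the arithmetic is illustrative, not the team's own accounting):

```python
# Figures quoted in the article.
neuron_rate_hz = 100             # a brain neuron fires ~100 times a second
clock_hz = 200_000_000           # a 200MHz PC processor

cycles_per_firing = clock_hz / neuron_rate_hz
print(f"{cycles_per_firing:,.0f} clock cycles per biological firing")
# -> 2,000,000: the processor runs two million cycles in the time a real
#    neuron fires once - the headroom used to simulate each artificial
#    neuron in software.
```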
The system he has set up is a combination of vision and linguistic representation. The "visual" network (a 64 by 64 grid onscreen) is shown a picture; the "language" network is told that it is a cat; the "pattern" network that it is red. After about an hour's tuition, it can recognise all sorts of cats and other objects, in all sorts of colours - and even imagine them in impossible colours.
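The general idea can be sketched as a toy cross-modal associator - emphatically not the Imperial College architecture, and with every pattern below invented for illustration. It merely shows how separate "language" and "colour" inputs can jointly drive a "visual" layer, even for combinations never shown in training:

```python
import numpy as np

SHAPES  = ["cat", "mouse", "banana"]
COLOURS = ["red", "blue", "yellow"]

# Hand-picked binary patterns stand in for the "visual" layer: a shape
# region plus a colour region (a stand-in for the 64-by-64 grid).
shape_patterns = {
    "cat":    np.array([1, 1, 0, 0, 1, 0, 1, 0], float),
    "mouse":  np.array([0, 1, 1, 0, 0, 1, 0, 1], float),
    "banana": np.array([1, 0, 0, 1, 1, 1, 0, 0], float),
}
colour_patterns = {
    "red":    np.array([1, 0, 0, 0], float),
    "blue":   np.array([0, 1, 0, 0], float),
    "yellow": np.array([0, 0, 1, 0], float),
}

def one_hot(names, name):
    v = np.zeros(len(names))
    v[names.index(name)] = 1.0
    return v

# Associative matrices from the two "language" inputs to the visual layer.
W_shape  = np.zeros((8, len(SHAPES)))
W_colour = np.zeros((4, len(COLOURS)))

# "Tuition": present labelled examples and strengthen co-active links
# (a simple Hebbian rule). Note "blue banana" is never shown.
for shape, colour in [("cat", "red"), ("mouse", "blue"), ("banana", "yellow")]:
    W_shape  += np.outer(shape_patterns[shape],   one_hot(SHAPES,  shape))
    W_colour += np.outer(colour_patterns[colour], one_hot(COLOURS, colour))

def imagine(shape, colour):
    """Drive the visual layer from words alone, even for unseen pairings."""
    visual_shape  = W_shape  @ one_hot(SHAPES,  shape)
    visual_colour = W_colour @ one_hot(COLOURS, colour)
    return (np.concatenate([visual_shape, visual_colour]) > 0.5).astype(int)

print(imagine("banana", "blue"))   # a composite the net never saw taught
```

Run the association in the other direction - from the visual layer back to the labels - and you get the recognition half of the system; coupling the two directions gives the sort of feedback between modalities described next.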
The discovery, he says, is that the essential element for consciousness is a feedback system between at least two such "modalities". In humans, we have five - at least - modalities. We call them senses.
In building his system, he says, "you end up with a virtual machine which becomes artificially conscious of its virtual world, the one that you expose it to in the machine. But you could easily move that into a robot."
Instead of showing the robot screen images, you could hook up a digital camera to its input. With sufficient education about the "names" of things it was seeing, you would develop a sentient robot. "It will develop a sense of 'self'," Professor Aleksander says. "It can develop an internal representation of its own effect on the world."
One might argue that Professor Aleksander is cheating - that the machine is being given a language, and told what the answers are. But the words used for the objects are more for our convenience, so we can observe the system deciding something is red. The neural network has already determined what that something is; all it needs is a label to hang on it. After all, parents teach children the names of objects in the same way: a child is conscious and has the capability to learn, but needs a common language to communicate.
Does this mean then that language is a prerequisite of consciousness? "An object that has a language system will have greater consciousness than one that doesn't. But it's not a prerequisite. You just need more than one modality."
So what would a machine that was conscious of the outside world, and us, be like? Would we like them? Would they like us? Might conscious machines become cleverer than their makers? "My pocket calculator is cleverer than me - in its particular domain. You'll have robots that are more dextrous, or better able to search Mars than humans. But whether they will solve philosophical problems is another matter ... Maybe I'm being an arrogant human; but I don't know where this leap into greater overall 'smartness' would come from. I think they'll have peculiar characteristics - they'll use language very well, yet have the sentience of a slug."
And what about fears that they might run amok and slay us? "All the science fiction tales give the machine elements which aren't about consciousness, but about being human - such as ambition."