Johnny Depp plays Will Caster, a brilliant artificial intelligence (AI) researcher who is poisoned by anti-technology terrorists. He's dying, so his wife Evelyn Caster, played by Rebecca Hall, concocts a plan to save him. She uploads her husband’s brain to the Internet.
The result is a super-intelligent machine that may even be described as conscious. Far-fetched, you say? Well, it turns out that real scientists are taking that possibility very seriously.
“Once you have a machine that's intelligent enough to improve its own software and hardware,” says Stuart Russell, a professor of computer science and engineering at UC Berkeley, “then there’s no limit to how smart it can become. It can add as much hardware as it wants, it can reprogram itself with much better algorithms, and then it rapidly goes far beyond human comprehension and human abilities.”
And Russell admits that this could end badly for humans. “If we make machines that are much smarter than human beings,” he continues, “then it’s very difficult to predict what such a system would do and how to control it.”
The AI field, he says, is going through a “process of realization that 'more intelligent' is not necessarily better.”
“Just like the nuclear physicists,” Russell explains, “when they figured out [the] chain reaction, they suddenly realized, ‘Oh, if we make too much of a chain reaction, then we have a nuclear explosion.’ So we need controlled chain reactions, just like we need controlled artificial intelligence.”
Christof Koch, Chief Scientific Officer at the Allen Institute for Brain Science, agrees that our work on artificial intelligence, if successful, "has the risk of being an existential threat to human society."
"Just like once the nuclear genie was out of the bottle, we [always] live under the possibility that within twenty minutes we will be obliterated under a nuclear mushroom," says Koch, "now we live with the possibility that, within the next fifty to a hundred years, this invention that we’re working on might be our final invention and that our future may be bleaker than we think.”
Both Koch and Russell believe the big issue posed by the movie is real, even if the actual science in the film is fairly absurd. Koch says the idea of doing a “brain upload” is not even remotely possible now.
“At this point, we don't even know at what [level] we need to reconstruct the brain," says Koch. "Is it at the molecular level? [The] memory of your first kiss is encoded in the contact points among nerve cells, called synapses, and we have on the order of a quadrillion of them.”
If uploading a brain means reconstructing each molecule within one of those synapses, he says, then we are “almost infinitely far removed from that practical possibility.”
Koch states that scientists can’t even map on a computer the neurons of a worm — an animal with only 302 nerve cells — in a way that accurately simulates the reality of that worm.
The brain upload was also “the most difficult part to swallow” for Stuart Russell, who co-authored the book Artificial Intelligence: A Modern Approach.
“The brain is incredibly complex and they attach probably twenty-five electrodes to his scalp and they drill into his brain a bit,” Russell says. The brain, however, is not just electrical activity; it has a profoundly deep structure: billions of neurons, trillions of synapses, and all of their connections and details.
“We can surely in the future ... program entities so that they behave as if they have conscious feelings, and they say they have conscious feelings — of love or trust or anger — but how do we really know?” Koch asks. “For that we need a theory of consciousness, and there is no agreed-upon theory of consciousness right now.”
In the film, after Dr. Will Caster’s brain has been uploaded, one of the other scientists asks Caster's computer image: “Can you prove that you're self-aware?” "That's a difficult question," the virtual Dr. Caster shoots back. "Can you prove that you are?"
Koch says we can never know for sure whether another being is truly conscious. All we have to go on is that person telling us he is conscious.
“No one I know in AI is seriously trying to build a conscious machine,” says Stuart Russell, “because we simply don't know how. If someone gave me a trillion dollars to do it, I would just give it back, because we really have absolutely no idea what consciousness is."
He says that the brain operates because of molecules moving, electrical charges passing, and so on. "But we can't explain why it is that certain electrical charges and chemical movements constitute consciousness whereas others don’t.”
Still, Koch says, scientists are getting serious about the topic.
“Whereas before,” he says, “only philosophers used to worry about consciousness, now there are scientists — neuroscientists, computer scientists, physicists — who ask the question, ‘What does it take for a physical system like our brain to be conscious and under what circumstances can we recreate these subjective feelings? To what extent do other physical systems like a computer have these feelings of self-consciousness or self-awareness, as in the movie?’”
Russell and Koch agree there is an increasing awareness within the scientific community of the responsibility to think seriously about the consequences of creating conscious machines in the future.
“If you’re working in a field whose success would probably be the biggest event in human history, and, as some people predict, the last event in human history,” Russell says, “then you’d better take responsibility.”
“What I'm finding,” he continues, “is that senior people in the field, who have never publicly evinced any concern before, are privately thinking that we do need to take this issue very seriously — and the sooner we take it seriously, the better."
Just look back to those who unleashed the power of the atom, he says. "I think the nuclear physicists wish they had taken [the social impact] seriously much earlier than they did.”