"If you control the code, you control the world," security adviser Marc Goodman said in a 2012 TED Talk. But what happens when humans no longer control the code?
Today, coding is being disrupted by something called "machine learning." With traditional coding, an engineer writes specific instructions for a computer program to follow. But with machine learning, a programmer "trains" the computer program to do its job by feeding it a bunch of data. Extremely complicated equations take care of the rest; even the programmers don't totally understand how the process works.
Are we relinquishing our power when we teach machines, instead of programming them? Jason Tanz, editor-at-large for WIRED magazine, took a deep dive into machine learning for the publication's June cover story, "The End of Code and Future of A.I."
"The idea for this story came about from a conversation I was having with Andy Rubin, who is one of the founders of Android and a big A.I. geek," says Tanz. "He was saying that while he's very excited about the coming age of machine learning, he was a little saddened by it because, as a programmer, he really loved getting under the hood and writing instructions, and having command over this world. Now with machine learning, it's a much more abstracted kind of control: you can't, as he put it, cut off a head and look inside and see how the brain works."
Though he doesn't believe coding will go away anytime soon, Tanz does believe things in the tech world are changing rapidly.
"The coding-based worldview, where everything is kind of understandable and you can break everything down into parts, and once you understand an algorithm you can tweak it and optimize it, that idea is going to go away more and more as machine learning takes up more and more of the computing that we do," he says.
How will breakthroughs in artificial intelligence transform human activity and impact decision making in government and society? These are questions that Joshua Cooper Ramo has spent years examining. He's the author of "The Seventh Sense: Power, Fortune and Survival in the Age of Networks."
"We're sort of entering an era where the initial age of networks that we experience is about to get much more complex," says Ramo. "First of all, the networks are going to be instant; they're going to be what we call 'Zero Latency Networks,' so things will happen very, very quickly. And part of the nature of that is things will be happening so fast that you'll need to use A.I. and you'll need to use machines to solve problems because we want more and more speed."
When looking back, Ramo says some of the great inventions of the Industrial Revolution (trains, planes and ships, for example) were built to compress space and distance. In the modern era, our drive to create ever-faster networks caters to our desire to compress time.
"In our age, we need this kind of new sense for what it means to be constantly enmeshed. Part of that gets to this very important issue of, 'Where is the human in the loop?'" he says.
While advancements in artificial intelligence can help humans manipulate speed and time, A.I. may also impact the very fabric of our democracy.
"We're in the midst of what I think is a profound network election in many ways, and the way in which people are thinking about the world is exactly this issue: The ability to make a distinction between what they see on their social network and what actually goes on in the real world is sort of disappearing," Ramo says.
The evolution of artificial intelligence may also have extreme consequences for the world of science.
"We live in a world with increasing, what I call in the book 'black boxing' of systems: things disappear into these algorithms in ways that we can't understand," Ramo says. "The Scientific Revolution was really about humans discovering the answers to questions ourselves with our own minds. Where we are today is a point where the machines may be able to solve scientific problems or even commercial problems or maybe economic problems better than any human can, and we won't know how they got the answer."
So are we just dumbed-down robots? Ramo says that question can only be answered by looking at power structures.
"The guys who control the Google algorithm and who control the Facebook algorithm, they have more power arguably than any group in human history," he says. "On what basis are they accountable? How do they make decisions? How are those algorithms programmed? The fundamental problem is most of us can't understand that."
As the recent controversy surrounding Facebook shows, algorithms aren't always objective.
"Everything is a reflection of the ethics, somehow, of its creator, and that's true for algorithms as much as anything," says Tanz. "I think there's real concern in the A.I. community that if there's only a certain chunk of society that is creating A.I. and embedding it with their understanding of the world, that is going to have a massive influence on how technology develops, and therefore how society develops."
This story first aired as an interview on PRI's The Takeaway, a public radio program that invites you to be part of the American conversation.