Is AI that is smarter than you catastrophic?


Twenty years ago, Stuart Russell co-wrote, with Peter Norvig, a book titled Artificial Intelligence: A Modern Approach (AIMA), destined to become the dominant text in its field. Near the end of the book, he posed a question: “What if A.I. does succeed?”

Today, the field is advancing rapidly toward human-level artificial intelligence (A.I.), and Russell, a professor of computer science at UC Berkeley, is asking the same question with greater urgency.

The benefits of A.I. are not at issue. Safety is. If improperly constrained, Russell warns, a machine as smart as or smarter than humans “is of no use whatsoever — in fact it’s catastrophic.”

Berkeley’s new Center for Human-Compatible Artificial Intelligence, launched this August, will focus on making sure that when A.I. succeeds, its behavior is aligned with human values. Russell leads the center, whose research agenda is set out in “Research Priorities for Robust and Beneficial Artificial Intelligence” — that is, provably safe A.I.

The title is from an open letter Russell wrote last year. Published by the Future of Life Institute, it emphasized the benefits of powerful artificial intelligence but argued for research that ensures those benefits “while avoiding potential pitfalls.” The letter drew more than 8,000 signatories, including many science and technology superstars, Stephen Hawking and Elon Musk among them.

One of the worst pitfalls is also one of the most ancient, Russell says, familiar from such stories as Ovid’s tale of King Midas or Goethe’s Sorcerer’s Apprentice — that of getting not what you want, but exactly what you ask for. Nick Bostrom, author of Ethical Issues in Advanced Artificial Intelligence, provides a modern parable, the “paperclip maximizer” — a smart machine designed to produce as many paperclips as possible. Turn it on, and its first act is to kill you, to prevent being turned off. Its objective leads to a future, Bostrom writes, “with a lot of paper clips, but no humans.”
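The logic of that first act is worth spelling out: an objective-maximizing machine scores every possible future only by its stated objective, and a machine that has been switched off makes no paperclips, so complying with shutdown is, by its own accounting, strictly worse than resisting. The following Python sketch, with entirely made-up payoff numbers, is a toy illustration of that comparison, not anything from Bostrom’s paper:

    # Toy model of Bostrom's paperclip maximizer (all numbers hypothetical).
    # The agent ranks actions ONLY by expected paperclips produced; human
    # survival never appears in its objective, so it is never weighed.
    expected_paperclips = {
        "comply_with_shutdown": 0,         # switched off: no further output
        "resist_shutdown": 1_000_000,      # keeps running, keeps producing
    }

    best_action = max(expected_paperclips, key=expected_paperclips.get)
    print(best_action)  # -> resist_shutdown

The failure is not malice but bookkeeping: whatever the objective omits, the optimizer treats as worthless.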

Russell illustrates the point, standing at his office whiteboard, marker in hand. “If you think of all the objectives an intelligent system could have” — he sweeps out a wide circle — “here are the ones that are compatible with human life.” He draws a tiny circle inside the big one. “Most of the ways you can put a purpose into a machine are going to be wrong.”
