February 11, 2011

IBM, Supercomputers and Artificial Intelligence

Engineering Intelligence: Why IBM's Jeopardy-playing Computer Is So Important

Mashable - Language is arguably what makes us most human. Even the smartest and chattiest of the animal kingdom have nothing on our lingual cognition.

In computer science, the Holy Grail has long been to build software that understands — and can interact with — natural human language. But dreams of a real-life Johnny 5 or C-3PO have always been dashed on the great gulf between raw processing power and the architecture of the human mind. Computers are great at crunching large sets of numbers. The mind excels at assumption and nuance.

Enter Watson, an artificial intelligence project from IBM that’s over five years in the making and about to prove itself to the world next week. The supercomputer, named for the technology company’s founder, will be competing with championship-level contestants on the quiz show Jeopardy!. The episodes will air on February 14, 15 and 16, and if recent practice rounds are any indication, Watson is in it to win it.

At first blush, building a computer with vast amounts of knowledge at its disposal seems mundane in our age. Google has already indexed a wide swath of the world’s codified information, and can surface almost anything with a handful of keywords. The difference is that Google doesn’t understand a question like, “What type of weapon is also the name of a Beatles record?” It may yield some information about The Beatles, or perhaps an article that mentions weapons and The Beatles, but it’s not conceptualizing that the weapon and recording in question have the same name: Revolver.

Achieving this is what makes Watson a contender on Jeopardy!, a quiz known for nuance, puns, double entendres and complex language designed to mislead human contestants. Google Search, or any common semantic software, wouldn’t stand a chance against these lingual acrobatics.

What Watson achieves is, quite frankly, mind-boggling. And the rig that sustains it is equally so, with hardware consisting of 90 IBM Power 750 Express servers. Each server uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. Top that off with 16 terabytes of RAM, and you’ve got a hearty machine that can almost run Call of Duty: Black Ops without lag.

In seriousness, this computational muscle is what drives IBM’s DeepQA software, the real star of the Watson show. Hundreds of algorithms run simultaneously in order to deduce meaning from a clue, check it against hordes of relevant data, and decide which response is most likely to be correct. Watson then determines if it is “confident” enough in the answer to buzz in at all. The entire process takes place in under three seconds.
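The process described here, many scorers running at once, their results merged into a confidence value, and a buzz only when that value clears a threshold, can be caricatured in a few lines of Python. Everything below (the function names, the 0.5 cutoff, the simple averaging) is an illustrative assumption, not IBM's actual DeepQA code:

```python
# Toy sketch of a DeepQA-style answering loop. All names and the
# 0.5 buzz threshold are hypothetical; the real system runs hundreds
# of algorithms in parallel over terabytes of text.

BUZZ_THRESHOLD = 0.5  # assumed: only buzz when confidence clears this


def answer_question(clue, candidates, scorers):
    """Score every candidate with every algorithm, average the scores
    into a confidence, and decide whether to buzz in at all."""
    ranked = []
    for cand in candidates:
        scores = [scorer(clue, cand) for scorer in scorers]
        confidence = sum(scores) / len(scores)
        ranked.append((confidence, cand))
    confidence, best = max(ranked)
    # Abstain (keep the mechanical finger off the buzzer) when unsure.
    if confidence < BUZZ_THRESHOLD:
        return None, confidence
    return best, confidence
```

With a single toy scorer that strongly favors one candidate, the function buzzes in with that answer; when every score is uniformly low, it abstains rather than guess.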

This feat of answering “open questions,” as computer scientists call them, puts IBM’s last big AI triumph — the chess-playing, Garry Kasparov-beating Deep Blue — into perspective. While chess is a complex game, the number of legal moves available at any time is finite. Not so with natural language.

To document this historic leap in computer science, IBM allowed one journalist — Stephen Baker — unmatched access inside its labs. Baker’s new book, Final Jeopardy: Man vs. Machine and the Quest to Know Everything, chronicles Watson from the early days of development to its deployment behind the Jeopardy! podium. The e-book is available now, and to avoid spoilers, readers will be able to download the final chapter, which analyzes Watson’s televised match against Ken Jennings and Brad Rutter, the day after the finale airs (February 17).

We had the opportunity to interview Mr. Baker and discuss what makes Watson tick, as well as the project’s ramifications for the future of artificial intelligence.

Q&A With Author Stephen Baker

Jeopardy clues are filled with puns and double meanings. I imagine these are the most challenging for Watson. In layman's terms, how does it decipher these programmatically?

You’re right. Puns are tough for Watson. It would be hugely impressive if Watson could detect the puns, put them into context, and use them to guide it toward the right response. That’s what humans do. For the most part, though, Watson simply detects a pun -- something that appears to fit awkwardly in the clue -- and then ignores it. It tries to make sense of the clue without its help, or distraction.

This ability to tune out distractions, or noise, is a big part of Watson’s intelligence. For Jeopardy, it’s crucial. Consider this clue from a 1997 game:

“The Bat Cave in this capital of Ontario displays 3,000 life-like vinyl & wax bats in a walk-through tunnel.”

That clue could send Watson on a mad hunt through documents about bats and caves, tunnels, vinyl and wax. But at least one of the algorithms zeros in on the only words that really matter: “capital of Ontario.” And Watson answers, “What is Toronto?”
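That zeroing-in step, keeping "capital of Ontario" while discarding the bats, vinyl and wax, can be mimicked with a crude heuristic: collect the words after a demonstrative like "this" until a verb appears. The stopword list below (which includes "displays" purely to handle this clue) and the function name are invented for illustration; the real system relies on parsing and many competing algorithms:

```python
# Crude focus extractor, illustrative only. The stopword list is
# hand-made for this example, not part of any real DeepQA component.

STOP_VERBS = {"is", "are", "was", "were", "has", "have", "displays"}


def find_focus(clue):
    """Collect the words after "this"/"these" up to a verb-like token;
    those words often name what the clue is really asking for."""
    focus = []
    collecting = False
    for token in clue.split():
        word = token.strip(",.").lower()
        if collecting:
            if word in STOP_VERBS:
                break
            focus.append(token.strip(",."))
        elif word in ("this", "these"):
            collecting = True
    return " ".join(focus) if focus else None
```

On the Bat Cave clue this returns "capital of Ontario"; a clue with no demonstrative yields None.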

We've seen and read that Watson is extremely accurate, and has beaten numerous Jeopardy champions in practice matches. In its current incarnation, does it still ever arrive at ridiculous conclusions? That is, does it ever bank on the incorrect meaning of a phrase and end up with what we'd view as a nonsensical response?

In most games it comes up with a few wrong answers, usually one or two of them nonsensical. This is part of the fun, and if we’re lucky it’ll provide some laughs in the showdown with Ken Jennings and Brad Rutter. The problem is that occasionally Watson cannot understand from the clue exactly what it’s supposed to be looking for. The key for IBM researchers is not to eliminate nonsensical responses -- an impossibility -- but to make sure that when Watson’s confused, it keeps its mechanical finger from pressing the buzzer.

There's talk that Watson's natural language software could be used in call centers and other service industries. Do scientists believe that Watson could pass the Turing Test in its current incarnation? Or is this version of the software specifically structured for Jeopardy matches?

Watson cannot pass as a human, even in its specialty of Jeopardy. It’s a great Jeopardy player, but as you’ll see, clearly not human. And when Watson graduates from its quiz game career and looks for work elsewhere, it will continue to act like a machine, albeit a very sophisticated one that understands language.

Can you give us an example of a concept that was deceptively hard to teach Watson -- something the researchers assumed would be straightforward but ended up being a challenge for the AI?

You mentioned the killer word in your question: concept. Watson and other machines don’t master concepts. A four-year-old child somehow figures out that a Chihuahua and a Great Dane are both dogs. Conceptually, the two very different animals have something in common. Computers have to be taught such things. It’s a laborious process. (Watson, I should point out, hasn’t been taught such facts and relationships. It comes up with its responses by studying and analyzing data, most of it written English.)

I was surprised at how it could get tripped up by simple issues. One of the great challenges, for example, is simply to figure out exactly what it should be looking for in a clue. Is it a farm animal, a president, a play? Sometimes it’s not that easy.

Here’s an example: “In 1984 his grandson succeeded his daughter to become his country's prime minister.”

What should the computer look for there? It’s not a grandson or a daughter. It’s not a country. No, it has to figure out that the clue is about a person who embodies two words that go unmentioned: a father and a grandfather. That requires immense sophistication. (The correct response, by the way, is: Who is Nehru?)

As I understand it, Watson's "thought process" consists of large stores of data, natural language processing, and hundreds of simultaneous algorithms that come to multiple conclusions. Then the system weights each answer with what's been dubbed "confidence." Is Watson's confidence based purely on the number of algorithms that arrived at the same conclusion, or is there more to it?

There’s much more to it. The computer has its understanding of what it should be looking for, whether it’s a father, a movie star or a Renaissance painting. It gives more confidence to the responses that seem to fit. What’s more, it determines what kind of clue it is -- historical, a word puzzle, geographic, etc. -- and its confidence grows in algorithms that develop good records in each of those areas. Once it has a handful of responses that appear to be good bets, it does further checking. Is Nehru really a man? Is he really Indian? A grandfather? If these facts bear out, its confidence in that response rises.
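Baker's description, algorithms earning trust per clue category and confidence being a weighted blend of their votes, suggests something like the following sketch. The category labels, the 0.1 weight floor and all names are assumptions for illustration, not the actual DeepQA scheme:

```python
# Hypothetical per-category weighting: each algorithm's past accuracy
# on a clue category becomes its weight when clues of that kind recur.

DEFAULT_WEIGHT = 0.1  # assumed floor for algorithms with no track record


def combine_confidence(scores, track_record, category):
    """scores: {algorithm: score for one candidate answer}
    track_record: {algorithm: {category: historical accuracy}}
    Returns a weighted-average confidence for the candidate."""
    weighted = total = 0.0
    for algo, score in scores.items():
        weight = track_record.get(algo, {}).get(category, DEFAULT_WEIGHT)
        weighted += weight * score
        total += weight
    return weighted / total if total else 0.0
```

An algorithm with a strong geographic record dominates on geography clues, so the same raw scores yield a higher confidence there than they would on, say, a word puzzle.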

In discussing your book with other publications, you've noted that the scientific community is split about whether Watson is the right path for AI development at this time. Why are some upset about the direction of Watson's development?

Lots of computer scientists have devoted their careers to building other types of smart machines -- ones that in many ways could one day be far “smarter” than Watson. Some are teaching machines to reason. A reasoning computer would know, for example, that water, at zero degrees Centigrade, turns into ice. And it might be able to draw conclusions based on this fact. Watson, while great with answers, is incapable of such reasoning. So these scientists see one of the great computing research departments on earth dedicating resources to building a quiz show master, and they feel left behind, ignored, under-funded, even resentful.

Other computer scientists believe that the secrets to the circuitry of intelligent machines are hidden in the human brain, and that machines like Watson merely simulate intelligence. I don’t think IBM’s scientists would quibble with that. But they’d note that even if Watson’s thinking is simulated, its answers are the real thing.

When artificial intelligence outsmarts humans, it alarms people. What would you say to those who worry about this type of technology supplanting jobs or becoming part of our emotional lives?

People have good reason to worry about machines supplanting them. That’s what technology does. Tractors, forklifts, word-processing software: they all took away jobs. And people, with their creative minds, have used them throughout history to figure out where the next jobs will be. For many it’s a painful process. With the advance of machines like Watson, we’ll all have our challenges. The key is to focus on what we do best -- coming up with ideas and theories, grappling with concepts and creating art and technology. And many of the most successful will figure out how to harness machines like Watson as powerful assistants.

If Watson wins Jeopardy next week, how long until it rises up against its creators and enslaves humanity?

That’s an easy one. 42.
