Thursday, 23 May 2013

IBM’s Watson Tries to Learn…Everything

Read at: http://spectrum.ieee.org/podcast/robotics/artificial-intelligence/ibms-watson-tries-to-learneverything/

What happens when Watson learns a million databases? RPI students and faculty hope to find out



Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
Computers aren’t just getting better, they’re getting smarter. Sixteen years ago, IBM’s Deep Blue software beat the reigning chess champion, Garry Kasparov. IBM had spent seven years creating it, and it was time well spent. The victory got the world’s attention and proved that superior computation skills could at least sometimes add up to superior performance.
Two years ago, IBM’s Watson software beat the world’s two best players in the television game show “Jeopardy!” Although “Jeopardy!” is a test of trivia, the victory was anything but trivial. It showed how well artificial intelligence researchers could process ordinary language and extract knowledge from unstructured databases.
Since then, Watson has been put to work learning something a lot less trivial—medical diagnosis. But that’s still a very limited domain—in fact, it’s restricted to cancer diagnoses so far.
But IBM is also looking to the long term. It has given one of the world’s leading AI researchers, at a leading university for AI, an open-ended three-year charter to make Watson smarter.
My guest today is Jim Hendler. He’s a professor of computer science and cognitive science at Rensselaer Polytechnic Institute, in Troy, N.Y. He’s been a key researcher in the related fields of knowledge discovery, software agents, and the Semantic Web. He’s also worked on autonomous mobile robots, another area where we want machines to be as intelligent as possible, or maybe not. He joins us by phone.
Jim, welcome to the podcast.
Jim Hendler: Thanks very much, Steve. Good to be here.
Steven Cherry: I called this an open-ended three-year charter to make Watson smarter. Is that correct?
Jim Hendler: That’s pretty much correct. Let me start by saying it’s not just me. There are a few of us involved, and we’re really looking at Watson in several different ways. One is extending capabilities. Two is just understanding how it does what it does in a more academic computer-science-theory sort of way, and then really trying to see what else we can use it for, what kind of extensions to it will make it more useful across a wider range of things.
Steven Cherry: So your students and colleagues will tackle a wide variety of problems, but there’s one that interests you personally, and that’s the thousands and thousands of open data sets around the world. What do you have in mind for Watson?
Jim Hendler: You know, this is something that is interesting for a number of different reasons. Governments around the world have been releasing what’s called open data—data sets that they’ve collected, generally using taxpayer or citizen funds, to do the work of government—and those data have traditionally come into government and only been released back in the form of reports.
But in the past few years, it’s becoming more and more powerful to actually release the data and let third parties either build things for governments or create innovative applications, et cetera. So, for example, just last week the president of the United States announced a new executive order that all data released by the government has to be released in machine-readable formats, at least by the executive branch.
So that’s leading to literally hundreds of thousands of data sets being released, and when you look at a data set, all you see is sort of a bunch of numbers and the titles of the fields. So understanding what the data is about, and finding the data you need in these huge collections, is a very hard problem.
So what we want to do is use Watson’s capabilities to put together the descriptive unstructured part—the thing that says what the data set does—with the metadata, the data about the data set: when it was released, by whom, and for what purpose—and with some of the things we can actually find in the data. So you’d be able to ask Watson questions like, “What data set can help me learn about obesity in Europe?” or maybe, more importantly, “What data set can help me get a job in Billings, Montana?”
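As a rough illustration of the kind of metadata-driven data-set search Hendler is describing, here is a minimal sketch in Python. The catalog entries, field names, and bag-of-words scoring are invented for the example; Watson’s actual pipeline is far more sophisticated.

```python
# Minimal sketch: ranking open-data catalog entries against a natural-
# language question using only their descriptive metadata. The catalog
# entries and the scoring scheme are invented for illustration.
import string

CATALOG = [
    {"title": "European Health Survey",
     "description": "Self-reported weight, height, and diet by country",
     "publisher": "Eurostat", "region": "Europe"},
    {"title": "Montana Job Openings",
     "description": "Monthly job postings by city and occupation",
     "publisher": "Montana Dept. of Labor", "region": "US"},
]

def tokenize(text):
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation).lower() for w in text.split()}

def score(entry, question):
    """Count question words that appear in the entry's metadata."""
    metadata_words = tokenize(" ".join(entry.values()))
    return len(tokenize(question) & metadata_words)

def search(question):
    hits = [(score(e, question), e["title"]) for e in CATALOG]
    return [title for s, title in sorted(hits, reverse=True) if s > 0]

print(search("What data set can help me learn about obesity in Europe?"))
# -> ['European Health Survey'] (it matches only on "Europe"; linking
#    "obesity" to "weight" is exactly the semantic gap Hendler describes)
```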
Steven Cherry: So, one of the challenges here is that there are no real standards for data and very few protocols that are specific to data transfer and data aggregation?
Jim Hendler: Yeah, so there are issues to do with standards, but the real issue is semantics. The way a search engine gets its power is that it can find words on pages and how they correlate with each other. So if I see the word “tank,” and somewhere else near it on the page is the word “fish,” I get one concept, whereas if it’s the word “tank” and somewhere near it is “soldier” or “army,” I get a different concept.
The problem is, if I’m looking at one data set and see the number 13, and somewhere near it is the number 27, and in some other data set I see 13, and the number near it is 1026, are they the same? Are they different? If I know that those numbers represent class numbers or identifiers of people or ages, then suddenly I start to understand a little bit about whether these are linked together. We use the term linked data nowadays for a lot of this.
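To make that point concrete, here is a toy sketch, with invented field names and types, of why bare numbers can only be compared once a semantic type is attached to them:

```python
# Toy illustration: the same literal value means different things
# depending on the semantic type attached to its column. The field
# names and types here are invented for the example.
record_a = {"value": 13, "type": "age_in_years"}       # from data set A
record_b = {"value": 13, "type": "course_identifier"}  # from data set B
record_c = {"value": 27, "type": "age_in_years"}       # from data set A

def comparable(x, y):
    """Raw values can only be linked when their semantic types agree."""
    return x["type"] == y["type"]

print(comparable(record_a, record_b))  # False: age 13 != course 13
print(comparable(record_a, record_c))  # True: both ages, same scale
```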
So the question is, can Watson take advantage of the linked data, the unstructured descriptions, and, again, this information about the data we know: what country released it, when they released it, what agency it was from.
Steven Cherry: So to go back to your obesity example, I guess in one data set there might be a column for weight, and in another data set there’s a column for body-mass index, and it’s only the semantics that gives Watson a clue that these are generally about the same thing.
Jim Hendler: Right. And generally, furthermore, knowing one is from Europe, it’s probably in kilograms, and the other one is from the U.S., so it’s probably in pounds, and a lot of these things that seem easy to us as people are very subtle when you try to do them at scale with a computer.
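A minimal sketch of that provenance-driven unit normalization, assuming purely for illustration that a US source reports pounds and a European source kilograms:

```python
# Sketch of provenance-driven unit normalization: absent explicit unit
# metadata, assume a US source reports pounds and a European source
# kilograms. The rows and the assumption itself are illustrative only.
LB_PER_KG = 2.20462

def weight_kg(row):
    """Normalize a weight reading to kilograms using its provenance."""
    if row["source_region"] == "US":   # assumed pounds
        return row["weight"] / LB_PER_KG
    return row["weight"]               # assumed already kilograms

rows = [
    {"source_region": "US", "weight": 176.0},     # pounds
    {"source_region": "Europe", "weight": 80.0},  # kilograms
]

print([round(weight_kg(r), 1) for r in rows])  # [79.8, 80.0]: comparable now
```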
Steven Cherry: You’re also going to point Watson at social media: blog posts, Facebook postings, tweets. What do you think Watson will make of that?
Jim Hendler: So the trick is—let’s just take tweets for a minute—the tweets themselves are very short, so I’ve got these 140 characters, a lot of abbreviations, things like that, so there are some language challenges there. Then there are the hashtags, which give you some information, but they’re not unique.
Then you’ve got who tweeted it, when, and there are a lot of other things: where, what language it was in. And some other things, like sentiment, can also be extracted. So what we want to do is say, “Can you put all that together?” As the tweets come in, a lot of information is kept, and then you’d be able to pull some of this out.
Now, Watson is great at answering questions about individual things. It’s not really good yet, without some extensions, at doing things like complex reasoning over that. So if you said, “Are there more tweets that like x than dislike x?” that’s not going to work very well in Watson right now. But if you said, “Has anyone tweeted about the following event recently?” you might be able to find a list of those tweets or something, and again you’d hand that list to something else to process to see what the trends look like and things like that. So, again, it’s using Watson to extract information, pull the relevant things out, then you may have to use other processing as well.
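Here is a small sketch of that two-stage idea—extract structured features from each tweet, then hand them to separate aggregation logic. The tweets, word lists, and crude sentiment rule are invented illustrations, not Watson’s actual methods.

```python
import re
from collections import Counter

# Stage 1 extracts structured features from raw tweets (hashtags, a
# crude word-list sentiment); stage 2 aggregates over the results.
POSITIVE = {"love", "great", "like"}
NEGATIVE = {"hate", "awful", "dislike"}

def extract(tweet):
    """Stage 1: turn one tweet's text into structured features."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    if words & POSITIVE:
        sentiment = "pos"
    elif words & NEGATIVE:
        sentiment = "neg"
    else:
        sentiment = "neutral"
    return {"hashtags": re.findall(r"#(\w+)", tweet), "sentiment": sentiment}

tweets = [
    "Love the new #Watson demo!",
    "This #Watson talk was great",
    "I dislike waiting in line #mondays",
]

# Stage 2: the kind of aggregate question ("more likes than dislikes?")
# that Hendler says needs processing beyond Watson itself.
counts = Counter(extract(t)["sentiment"] for t in tweets)
print(counts["pos"] > counts["neg"])  # True: 2 positive vs. 1 negative
```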
Steven Cherry: I wish you luck. I mean, the database stuff sounds hard enough, but at least they’re not filled with sarcasm and metaphor and affect.
Jim Hendler: Yeah, well, that’s true, but so was the material Watson had to handle to play “Jeopardy!,” so we’re hoping that will help. But, again, I’d say the Twitter one is more to explore how we put together the different kinds of texts, and most importantly, can you take the things that are in Twitter, whether it’s the URLs that people point at or the hashtags and things, and use that to help you understand what’s going on?
So a tweet may not be very long, but if it’s talking about an article or points at a blog or something, then you can start using language tools there. So it’s more about how you can put together this mass of information into a memory that you can use for things, and that’s really the theme of a lot of the Watson research: memory-based reasoning.
Steven Cherry: So, I realize it’s still early days there, but are there some things that the students and the faculty there already have in mind?
Jim Hendler: You know, we have a list of things. The ones that I’ve talked about, we’re starting with the data stuff and looking toward social media. We’re looking toward some new application areas, and then we’re looking at some cognitive research.
So, again, what does Watson tell us about how we reason, right? The fact that Watson was able to beat—so Ken Jennings is to “Jeopardy!” as Michael Jordan is to basketball. If you beat Michael Jordan at basketball, you’ve learned something about basketball playing, and we want to say what we have learned about question answering, about how people think, about how Ken Jennings is able to do this amazingly broad reasoning that he does, just pulling facts from everything.
And so we’ll also be looking at that more cognitive side. And so we have groups of professors that are looking at all these different things and figuring out how we’re going to proceed in this kind of research.
Steven Cherry: You told my producer you were interested in exploring man-machine collaboration. What are some things that humans and computers can do together that computers can’t do by themselves?
Jim Hendler: Sure. Well, let’s start with a simple observation: If you go back and watch Watson playing “Jeopardy!” against Ken, there are questions where either Ken buzzes in because Watson wasn’t fast enough or where Watson had the right answer among its three answers, but it wasn’t its top answer or it wasn’t good enough that Watson wanted to make that guess.
What’s interesting is that if I watch that—and I’m a mediocre “Jeopardy!” player—Watson and I together would have gotten more questions correct than Watson did alone. And I’m now taking timing out of it, and buzzers, and things like that. So clearly, the reasoning process I use and the reasoning process the computer uses are very different, and so even though it can play “Jeopardy!” better than I can, together we can do better.
Now, use that same metaphor for lots of other things. I think, and have always thought, that when you can take the power of the computer to work against very broad amounts of information and use the creativity of the human to focus and narrow in on very specific things, or to say, “No, that doesn’t look right. That does,” the way that we do so well and computers [unintelligible], you’re suddenly into things that are powerful.
So even when they’re using Watson as a medical-diagnosis system, one of the changes made is to have Watson suggest possibilities to a doctor, with a little bit more capability to explain why those possibilities, because, again, we really think it’s the human who wants to be the final decision maker. The system is able to say, “You know, I’ve read more papers than you have recently, because each year there are thousands of medical papers that come out about cancer or whatever, so I can keep your doctor up to date with what’s being found.” But you, on the other hand, have that human thing that says, “No, I don’t think that’s right. I think this patient is different.”
Steven Cherry: So, for example, a doctor could see something on a scan that just looks really puzzling and interesting, and it’s the computer that could say, “Here’s three scans out of the 100 000 that I have that look exactly like that.”
Jim Hendler: Yeah, or similar to that, along some parameters. That’s exactly right. In other words, you want to use the computer for the thing that would be very hard for you to do. Or “I’m wondering why this patient seems to have some symptom,” and the computer can tell me there’s been a recent research paper that says that symptom often correlates with some separate medication that you wouldn’t have known to look for, so ask that patient whether they’re taking that medication.
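As a toy version of that similar-case lookup, the sketch below runs a nearest-neighbor search over invented scan “feature vectors”; a real imaging system would compute far richer features than these stand-ins.

```python
import math

# Toy nearest-neighbor lookup over invented scan "feature vectors",
# standing in for the "find the most similar cases out of 100 000"
# idea. The archive and query vectors are illustrative only.
archive = {
    "scan_001": [0.1, 0.9, 0.3],
    "scan_002": [0.8, 0.2, 0.5],
    "scan_003": [0.2, 0.8, 0.4],
    "scan_004": [0.9, 0.1, 0.6],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(query, k=3):
    """Return the k archived scans closest to the query vector."""
    return sorted(archive, key=lambda s: distance(archive[s], query))[:k]

print(most_similar([0.15, 0.85, 0.35]))
# -> ['scan_001', 'scan_003', 'scan_002']: the cases that look most
#    like the puzzling scan, surfaced for the doctor to review
```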
So it’s that kind of putting things together. You know, after 9/11, everyone talked about connect-the-dots technology. This is sort of that kind of thing in the medical domain and other domains. People are very good at pulling it together once they have the information, but finding those needles across those many haystacks is something Watson can help with.
Steven Cherry: It seems like in the long run the computer is going to be better at just more and more. I mean, at some point the computer will know which are the most interesting and puzzling things on the scan, and, you know, generally speaking, well, we had on the show earlier this year the distinguished Rice University professor Moshe Vardi, and he said he was worried about computers starting to outsmart human beings just generally. Is that something that concerns you as well?
Jim Hendler: You know, it does and it doesn’t. What helps me out is I get e-mail—I got one from a friend this morning—“Why can’t computers do this better?” Right? And the answer is that some of these things that look easy are actually really hard.
You know, people were not so impressed by the computer playing “Jeopardy!” until somebody explained to them why “Jeopardy!” is so hard. It’s not just doing a Google search. You’ve got to find the right answer, put it together, and then people start saying, “Oh, yeah, I get it. That is hard.” I think computers will get better, but, again, I see them primarily getting better at the stuff I wish I didn’t have to do so much of.
I’m terrible with names. I just read yesterday that Google Glass is starting to have face recognition. Boy, would I love it when I’m walking outside and something says, “This is who that is.” But I don’t think the computer is going to tell me what interaction I had—it might tell me, you know, you met them on such and such a date. But I’m the one who’s going to remember whether we’re friends, whether we argue, that I know his kids, what kind of a social—so I think there’s a whole bunch of things we do as humans that we sometimes forget about when we focus on the computer question-answering, problem-solving type stuff. I think it will be a disruptive technology, but, again, what I find with Watson is that it’s one thing to have something that can answer the questions—it still can’t figure out what questions to ask.
Steven Cherry: You’ve worked on mobile robots. We’re on the verge of fully self-driving cars, and maybe planes too. Are you surprised at how fast things are moving—no pun intended—or how slow?
Jim Hendler: Oh, God, that’s a good question. Some of both. I mean, I would not have guessed that the self-driving cars would be at the level that they are now without significantly more investment. On the other hand, there has been more investment than I would have expected, so I would have thought we were a few years from what I’m seeing, but certainly not decades. But I think before that stuff is really out there in the world, we have a while to go.
Steven Cherry: Jim, I don’t know exactly when computers are going to take over the world, but Watson seems to be leading the way. So please do whatever you can to ensure that he’s a benevolent overlord, and thanks for joining us today.
Jim Hendler: Thanks much. And I’ll tell Watson to take good care of you in the future.
Steven Cherry: We’ve been speaking with RPI professor Jim Hendler about the future of machine intelligence.
For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
This interview was recorded Tuesday, 14 May 2013.
Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli
Read more “Techwise Conversations,” find us in iTunes, or follow us on Twitter.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
To Probe Further
What Is Toronto? A few wrong answers in “Jeopardy!” and a whole lot of right ones say a lot about how humans and computers will soon collaborate
The Job Market of 2045 What will we do when machines do all the work?
IBM’s Watson Goes to Med School This AI program mastered “Jeopardy!” Next up, oncology
