
IBM "Watson" Demonstration on Jeopardy is a Fraud

page: 2
posted on Feb, 15 2011 @ 08:29 PM
nevermind
edit on 2/15/2011 by Hyzera because: (no reason given)



posted on Feb, 15 2011 @ 08:33 PM

Originally posted by _Del_
Based on what I saw, Watson comes up with several answers and ranks them based on how confident the answer is. It'd be interesting to see how high Chicago ranked.


I was thinking just that. I didn't see that little rating screen for this answer. Maybe it was there and I missed it.



posted on Feb, 15 2011 @ 08:35 PM
possible confounding information:

from wiki's "Toronto (disambiguation)" page.
en.wikipedia.org...

* Toronto, Illinois, located south of Springfield, Illinois and to the west of Lake Springfield
* Toronto, Indiana
* Toronto, Iowa
* Toronto, Kansas
* Toronto, Missouri
* Toronto, Ohio
* Toronto, South Dakota
* Toronto, an alternate name for Tamo, Arkansas


Originally posted by randomname
SPOILER ALERT: watson loses to the humans in the final round. the category was emotions. the answer to the final jeopardy question was: what is love.


"baby don't hurt me..."



posted on Feb, 15 2011 @ 08:35 PM
Wow
and to think I just watched an episode on this in my computer class haha,
talk about coincidence



posted on Feb, 15 2011 @ 08:35 PM
IBM's "Watson" is not just a demonstration of a computer answering questions, like some "super search engine" application.

Watson's "great leap" is in its ability to "understand" natural language as humans, or in this case, Alex Trebek, speak and write it.

Essentially, this is a test of whether AI-based computers have become "smart" enough to discern the difference between "glass" (something you can see through, usually part of a window) and "glass" (something you can drink from).

Deriving meaning from context, in real time, and using that understanding to correctly answer a question in as little time as possible: that is the real challenge.

As we can see, Watson sometimes misses the context by misplacing the emphasis of the words in the question; something we humans rarely do once we are familiar with the "natural" structure of language.
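The "glass" versus "glass" test described above can be sketched as context-overlap disambiguation, in the spirit of the classic Lesk algorithm. To be clear, the sense signatures and context words below are invented for illustration; they are not Watson's actual data or method.

```python
# Toy word-sense disambiguation by context overlap (simplified Lesk).
# The sense signatures here are hand-made, illustrative stand-ins.

SENSES = {
    "window_pane": {"see", "through", "window", "transparent", "pane"},
    "drinking_vessel": {"drink", "pour", "water", "cup", "fill"},
}

def disambiguate(context_words):
    """Pick the sense whose signature overlaps the context the most."""
    scores = {sense: len(signature & set(context_words))
              for sense, signature in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate(["please", "pour", "me", "a", "glass", "of", "water"]))
# drinking_vessel
```

A real system weighs far more evidence than bag-of-words overlap, but the principle is the same: surrounding words vote for a sense.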



posted on Feb, 15 2011 @ 08:39 PM

Originally posted by Hemisphere

Originally posted by _Del_
Based on what I saw, Watson comes up with several answers and ranks them based on how confident the answer is. It'd be interesting to see how high Chicago ranked.


I was thinking just that. I didn't see that little rating screen for this answer. Maybe it was there and I missed it.



If this had been a normal Jeopardy clue, Watson would not have buzzed. It had only 14% confidence in Toronto (whose Pearson airport is named for a World War II hero), and 11% in Chicago. Watson simply did not come up with the answer, and Toronto was its guess.

Even so, how could it guess that Toronto was an American city? Here we come to the weakness of statistical analysis. While searching through data, it notices that the United States is often called America. Toronto is a North American city. Its baseball team, the Blue Jays, plays in the American League. (That's why Ferrucci was wearing a Blue Jay jacket). If Watson happened to study the itinerary of my The Numerati book tour, it included a host of American cities, from Philadelphia and Pittsburgh, to Seattle, San Francisco, and Toronto. In documents like that, people often don't stop to note for inquiring computers that Toronto actually shouldn't be placed in the group.

thenumerati.net...
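The behavior the quoted article describes, buzzing only when the top-ranked answer clears a confidence threshold but being forced to respond in Final Jeopardy, can be sketched roughly as below. The threshold value is a placeholder (the real cutoff is not public); only the 14% and 11% figures come from the quote.

```python
# Sketch of confidence-gated answering. On a regular clue the system
# buzzes only above a threshold; in Final Jeopardy it must answer anyway.

BUZZ_THRESHOLD = 0.50  # illustrative placeholder, not IBM's actual value

def answer(candidates, final_jeopardy=False):
    """Return (best_answer, confidence), or (None, confidence) if staying silent."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if final_jeopardy or confidence >= BUZZ_THRESHOLD:
        return best, confidence
    return None, confidence  # too unsure to buzz on a regular clue

candidates = {"Toronto": 0.14, "Chicago": 0.11}
print(answer(candidates))                       # (None, 0.14): would not buzz
print(answer(candidates, final_jeopardy=True))  # ('Toronto', 0.14): forced guess
```

This matches the quote's point: on a normal clue Watson would simply have stayed silent, but Final Jeopardy forced its low-confidence guess onto the screen.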



posted on Feb, 15 2011 @ 08:47 PM

Originally posted by Bhadhidar
IBM's "Watson" is not just a demonstration of a computer answering questions, like some "super search engine" application.

Watson's "great leap" is in its ability to "understand" natural language as humans, or in this case, Alex Trebek, speak and write it.


I remember them saying at the beginning of the first show that he has no "hearing" abilities. They said he receives all clues via a type of text messaging.



posted on Feb, 15 2011 @ 08:55 PM

Originally posted by Bhadhidar
IBM's "Watson" is not just a demonstration of a computer answering questions, like some "super search engine" application.

Watson's "great leap" is in its ability to "understand" natural language as humans, or in this case, Alex Trebek, speak and write it.



I understand that completely Bhadhidar. They went into that a bit in the show. But it is not a demonstration of one or the other exclusively. It is a blend and that's how we think. We blend and answer derivatively. The processing power of this computer should have enabled it to shift gears and cross reference enough in the time allotted to link U.S. with City correctly even if it did not immediately understand U.S. equals United States. I feel certain that the "U.S." information was in there in spades and if I'm not mistaken Watson had the category at the same time the other contestants did. It had plenty of time over the commercial break to resolve the U.S. dilemma. Just my opinion for what it's worth.

One thing that puzzled me. Why was this computer programmed for understanding "natural English" and not Chinese? Unless of course the programmers are hoping for job security, at least through the Chinese update.



posted on Feb, 15 2011 @ 09:01 PM
reply to post by _Del_
 


Thanks for the posts, quotes and link _Del_.

Here is finally a web link on this subject:

Watson's Final Jeopardy Blunder In Day 2 Of IBM Challenge

I have some comments coming on this information from Huffington.



posted on Feb, 15 2011 @ 09:04 PM
Come on, give Watson a pass on this. I thought it was pretty obvious that he just got way too nervous up there on stage - everybody watching him, the pressure. It just got to him. It happens.



posted on Feb, 15 2011 @ 09:04 PM
reply to post by _Del_
 


I agree with what del said.

You don't really understand how computer comprehension works. It's not reading and understanding that question the way you and I do, exactly. And as Del said, it's a work in progress; there were doubts as to whether it would even be ready to compete at this time. You'd be wrong if you didn't think there was continual work on the system, and there will undoubtedly be more work.

Look at Deep Blue, the chess computer. It took a couple of years before it could flat-out beat a human opponent. Watson will have its time too.



posted on Feb, 15 2011 @ 09:10 PM
reply to post by Hemisphere
 



It was all Watson in Day 2 of the Jeopardy IBM Challenge, until Final Jeopardy anyway. The category was "U.S. Cities" and the clue was: "Its largest airport was named for a World War II hero; its second for a World War II battle."


There is the entire question. There are enough variables within the question, and this computer has more than enough power to sort them out in the time allotted. If it hung on any segment of the question, the next bit of information, each caveat, should have rerouted the search. If it hung on "Toronto," as is being explained, it should have continued on to the additional requirement of a second airport named for a World War II battle. If not that, at least the word "second."

Again, just my opinion. And... they had to test Watson on how many thousands or tens of thousands of test questions phrased in a similar "natural language" manner? I remain unconvinced that this was not a planned flub, and at best I am only mildly impressed.
edit on 15-2-2011 by Hemisphere because: (no reason given)
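The cross-checking argued for above, requiring every part of the clue to hold at once, can be sketched as intersecting candidate sets. The tiny "knowledge base" below is hand-made for illustration; the airport facts are real (Chicago's O'Hare is named for WWII hero Butch O'Hare, and Midway for the Battle of Midway), but this is not how Watson actually represents knowledge.

```python
# Toy constraint filtering: keep only candidates that satisfy every
# part of the clue. The sets are a hand-made, illustrative knowledge base.

us_cities = {"Chicago", "New York", "Toronto"}  # "Toronto" sneaks in via US towns
largest_airport_named_for_wwii_hero = {"Chicago"}    # O'Hare (Butch O'Hare)
second_airport_named_for_wwii_battle = {"Chicago"}   # Midway (Battle of Midway)

answers = (us_cities
           & largest_airport_named_for_wwii_hero
           & second_airport_named_for_wwii_battle)
print(answers)  # {'Chicago'}: Toronto fails both airport constraints
```

Hard logical intersection like this is exactly what Watson does not do; it combines soft statistical evidence instead, which is how a weakly supported "Toronto" can survive to the top of the list.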



posted on Feb, 15 2011 @ 09:23 PM

Originally posted by VonDoomen
reply to post by _Del_
 


I agree with what del said.

You don't really understand how computer comprehension works. It's not reading and understanding that question the way you and I do, exactly. And as Del said, it's a work in progress; there were doubts as to whether it would even be ready to compete at this time. You'd be wrong if you didn't think there was continual work on the system, and there will undoubtedly be more work.

Look at Deep Blue, the chess computer. It took a couple of years before it could flat-out beat a human opponent. Watson will have its time too.


Thanks for that VD. I certainly am one of the least equipped to comment on computer systems and comprehension. Continual improvement is a very common industrial concept; it is part and parcel of ISO certification.

One also wonders how much interest there would be if the humans were whitewashed here. Thus my contention that this might have been planned. Seems a small thing as the computer was not close to losing but this flub gave some false satisfaction and peace of mind to the audience seeing that we had not yet been replaced by AI.



posted on Feb, 15 2011 @ 09:52 PM

Originally posted by Hemisphere
There will be no link. I saw this in real time. ...

This had to be a programmed "miss". Why? You tell me.
Maybe you should have looked for a link, like the one _del_ found.


"Don't believe anything you hear and only half of what you see."
Does that include your OP?


Originally posted by _Del_
Found this link which explains how Watson derives answers.

thenumerati.net...

Hope you find it interesting.
Yes I found it interesting.

I hope hemisphere also finds it informative.



posted on Feb, 15 2011 @ 10:29 PM
Are you people stupid? Of course there are Torontos in the United States. Edit: I see 11andrew already mentioned this, but here it is again.

Toronto, Illinois, located south of Springfield, Illinois and to the west of Lake Springfield
Toronto, Indiana
Toronto, Iowa
Toronto, Kansas
Toronto, Missouri
Toronto, Ohio
Toronto, South Dakota
Toronto, an alternate name for Tamo, Arkansas
edit on 15-2-2011 by MasonicFantom because: (no reason given)



posted on Feb, 16 2011 @ 08:16 AM

Originally posted by Arbitrageur

Originally posted by Hemisphere
There will be no link. I saw this in real time. ...

This had to be a programmed "miss". Why? You tell me.
Maybe you should have looked for a link, like the one _del_ found.


"Don't believe anything you hear and only half of what you see."
Does that include your OP?


Originally posted by _Del_
Found this link which explains how Watson derives answers.

thenumerati.net...

Hope you find it interesting.
Yes I found it interesting.

I hope hemisphere also finds it informative.


Read the entire thread next time. Thanks for the duplicate info. Glad you buy the IBM line. It is after all a half-hour commercial.

My opinion is still valid. Nice try though.



posted on Feb, 16 2011 @ 08:27 AM

Originally posted by MasonicFantom
Are you people stupid? Of course there are Torontos in the United States. Edit: I see 11andrew already mentioned this, but here it is again.

Toronto, Illinois, located south of Springfield, Illinois and to the west of Lake Springfield
Toronto, Indiana
Toronto, Iowa
Toronto, Kansas
Toronto, Missouri
Toronto, Ohio
Toronto, South Dakota
Toronto, an alternate name for Tamo, Arkansas
edit on 15-2-2011 by MasonicFantom because: (no reason given)


Those "Torontos" should have been ignored, as they don't fit the other constraints in the clue. I have yet to see IBM state that Watson meant Toronto, Iowa as the correct answer. Mark Twain once called Berlin "the Chicago of Europe." That doesn't make Berlin a viable answer for a supercomputer to come up with.



posted on Feb, 16 2011 @ 08:48 AM
reply to post by Hemisphere
 


It is statistics: a Bayesian model, etc.

It's not 100% right, but it is right more than 70% of the time, and that is better than a human being.

Search engines are more relevant than a human being.

Google Translate, and voice translation, are also Bayesian; not perfect, but effective.
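A minimal illustration of the Bayesian scoring mentioned here: score each hypothesis by its prior times the product of word likelihoods (the naive Bayes recipe). All probabilities below are invented for illustration, but note the moral: purely statistical word associations ("American League," etc.) can drag "Toronto" toward the U.S.-city hypothesis, just as the thread's quoted article describes.

```python
# Naive Bayes sketch: log(prior) + sum of log word likelihoods per
# hypothesis. Every probability here is made up for illustration.
import math

priors = {"US_city": 0.6, "Canadian_city": 0.4}
likelihoods = {
    "US_city":       {"american": 0.3, "league": 0.2, "toronto": 0.05},
    "Canadian_city": {"american": 0.1, "league": 0.1, "toronto": 0.4},
}

def posterior_scores(words):
    """Unnormalized log-posterior for each hypothesis given the words."""
    scores = {}
    for hypothesis, prior in priors.items():
        log_p = math.log(prior)
        for w in words:
            log_p += math.log(likelihoods[hypothesis].get(w, 0.01))
        scores[hypothesis] = log_p
    return scores

print(max(posterior_scores(["toronto", "american", "league"]),
          key=posterior_scores(["toronto", "american", "league"]).get))
# US_city: the co-occurring "american"/"league" evidence outweighs the name
```

With the word "toronto" alone, the same model picks Canadian_city; it is the surrounding statistical evidence that flips the answer, which is the weakness the thread is arguing about.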



posted on Feb, 16 2011 @ 09:43 AM

Originally posted by psychederic
reply to post by Hemisphere
 


It is statistics: a Bayesian model, etc.

It's not 100% right, but it is right more than 70% of the time, and that is better than a human being.

Search engines are more relevant than a human being.

Google Translate, and voice translation, are also Bayesian; not perfect, but effective.


Understood! The blending of super computation with correctly interpreting idioms and other linguistic subtleties, and this for one language, is a needless exercise. The developing AI will quickly eliminate the need to interact with humans. This has all been discussed here on various threads. My point is that the final question was designed to stump the computer. This apparently meaningless miscue was set up as a pacifier, so that the threat of further human replacement by AI was suppressed, at least for now, in the weak-minded audience members. Again, just my opinion.

Don't you think that the term "U.S." would have or should have been explored prior to such a demonstration? Have you ever watched this program? Categories with "U.S." preceding other terms are extremely common: "U.S. Presidents," "U.S. History," "U.S. Navy," and so on. Of all idioms to flub on, "U.S."? If this was not intended, it shows extreme incompetence.

Here is a link to an article discussing the first of these three episodes:

IBM’s Watson Almost Sneaks Wrong Response by Jeopardy’s Trebek


Watson then must push a physical buzzer to respond, just like its human competitors. While this would seem to be a task at which computers would have an overwhelming advantage, Welty noted that Rutter was so well-known for his lightning-fast buzzing that the producers weren’t even mildly concerned.


Welty, as the article states, is "a member of Watson's algorithms team". Rutter might be "lightning-fast" when compared to other human competitors. There's much being either covered up or ignored. Could it be that various aspects of these programs were scripted, and so "the producers weren't even mildly concerned"? Thus no prior testing and discussion of any physical "buzz-in" differential between computer and humans?

Again, this is not simply a fun demonstration. This is a commercial. A Doritos ad gets more scrutiny here.



posted on Feb, 16 2011 @ 12:11 PM
reply to post by Hemisphere
 


What people don't seem to understand is that this thing is programmed by people.
And people make mistakes.
Ergo, it will make mistakes too.

The question then arises: can it transcend its programming and learn from its mistakes?



